2025-09-19 10:43:31.264682 | Job console starting
2025-09-19 10:43:31.301281 | Updating git repos
2025-09-19 10:43:31.369808 | Cloning repos into workspace
2025-09-19 10:43:31.662080 | Restoring repo states
2025-09-19 10:43:31.686271 | Merging changes
2025-09-19 10:43:32.205905 | Checking out repos
2025-09-19 10:43:32.511116 | Preparing playbooks
2025-09-19 10:43:33.227463 | Running Ansible setup
2025-09-19 10:43:37.226070 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-19 10:43:37.990676 |
2025-09-19 10:43:37.990874 | PLAY [Base pre]
2025-09-19 10:43:38.007913 |
2025-09-19 10:43:38.008050 | TASK [Setup log path fact]
2025-09-19 10:43:38.038225 | orchestrator | ok
2025-09-19 10:43:38.055714 |
2025-09-19 10:43:38.055857 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-19 10:43:38.100439 | orchestrator | ok
2025-09-19 10:43:38.116332 |
2025-09-19 10:43:38.116448 | TASK [emit-job-header : Print job information]
2025-09-19 10:43:38.170579 | # Job Information
2025-09-19 10:43:38.170806 | Ansible Version: 2.16.14
2025-09-19 10:43:38.170887 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-09-19 10:43:38.170938 | Pipeline: label
2025-09-19 10:43:38.170974 | Executor: 521e9411259a
2025-09-19 10:43:38.171006 | Triggered by: https://github.com/osism/testbed/pull/2766
2025-09-19 10:43:38.171040 | Event ID: 76be4b30-9545-11f0-9c2a-5ce2e4dce44b
2025-09-19 10:43:38.179650 |
2025-09-19 10:43:38.179774 | LOOP [emit-job-header : Print node information]
2025-09-19 10:43:38.308006 | orchestrator | ok:
2025-09-19 10:43:38.308295 | orchestrator | # Node Information
2025-09-19 10:43:38.308362 | orchestrator | Inventory Hostname: orchestrator
2025-09-19 10:43:38.308413 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-19 10:43:38.308458 | orchestrator | Username: zuul-testbed01
2025-09-19 10:43:38.308500 | orchestrator | Distro: Debian 12.12
2025-09-19 10:43:38.308547 | orchestrator | Provider: static-testbed
2025-09-19 10:43:38.308590 | orchestrator | Region:
2025-09-19 10:43:38.308633 | orchestrator | Label: testbed-orchestrator
2025-09-19 10:43:38.308675 | orchestrator | Product Name: OpenStack Nova
2025-09-19 10:43:38.308715 | orchestrator | Interface IP: 81.163.193.140
2025-09-19 10:43:38.323992 |
2025-09-19 10:43:38.324123 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-19 10:43:38.813988 | orchestrator -> localhost | changed
2025-09-19 10:43:38.833193 |
2025-09-19 10:43:38.833382 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-19 10:43:39.869577 | orchestrator -> localhost | changed
2025-09-19 10:43:39.884832 |
2025-09-19 10:43:39.884966 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-19 10:43:40.151201 | orchestrator -> localhost | ok
2025-09-19 10:43:40.158409 |
2025-09-19 10:43:40.158526 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-19 10:43:40.187840 | orchestrator | ok
2025-09-19 10:43:40.204321 | orchestrator | included: /var/lib/zuul/builds/b1291048054043b3b0db75d23259f197/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-19 10:43:40.212489 |
2025-09-19 10:43:40.212588 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-19 10:43:43.460201 | orchestrator -> localhost | Generating public/private rsa key pair.
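Every entry in this console stream follows the same shape: an executor timestamp, optionally an inventory node (here `orchestrator`, or `orchestrator -> localhost` for delegated tasks), and the message, separated by ` | `. A minimal sketch of a parser for that format (a hypothetical helper, not part of Zuul itself):

```python
import re
from datetime import datetime

# Matches "2025-09-19 10:43:38.038225 | <rest>"; <rest> may be empty.
LINE_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6}) \|(?: (?P<rest>.*))?$"
)

def parse_console_line(line):
    """Return (timestamp, node, message); node is None for executor-level lines."""
    m = LINE_RE.match(line)
    if not m:
        return None
    ts = datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S.%f")
    rest = m.group("rest") or ""
    # Node-scoped lines embed a second " | " separator: "orchestrator | ok"
    node, sep, msg = rest.partition(" | ")
    if sep:
        return ts, node, msg
    return ts, None, rest
```

Lines without a node part (e.g. `Job console starting`) come back with `node is None`, which distinguishes executor bookkeeping from per-host task results.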
2025-09-19 10:43:43.460498 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/b1291048054043b3b0db75d23259f197/work/b1291048054043b3b0db75d23259f197_id_rsa
2025-09-19 10:43:43.460551 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/b1291048054043b3b0db75d23259f197/work/b1291048054043b3b0db75d23259f197_id_rsa.pub
2025-09-19 10:43:43.460588 | orchestrator -> localhost | The key fingerprint is:
2025-09-19 10:43:43.460626 | orchestrator -> localhost | SHA256:HP96RFAGVJH503GFYNjvs9jVH0dT/loh5yCALPNhHJc zuul-build-sshkey
2025-09-19 10:43:43.460655 | orchestrator -> localhost | The key's randomart image is:
2025-09-19 10:43:43.460697 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-19 10:43:43.460727 | orchestrator -> localhost | | o.oo+*B= .o|
2025-09-19 10:43:43.460756 | orchestrator -> localhost | | o *.Eoo+ ..o|
2025-09-19 10:43:43.460783 | orchestrator -> localhost | | = o .. o o+|
2025-09-19 10:43:43.460810 | orchestrator -> localhost | | o o ..o+++|
2025-09-19 10:43:43.460838 | orchestrator -> localhost | | S ....=o=|
2025-09-19 10:43:43.460875 | orchestrator -> localhost | | .. oo*|
2025-09-19 10:43:43.460905 | orchestrator -> localhost | | ..o *+|
2025-09-19 10:43:43.460933 | orchestrator -> localhost | | .o + .|
2025-09-19 10:43:43.460960 | orchestrator -> localhost | | .. |
2025-09-19 10:43:43.460987 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-19 10:43:43.461056 | orchestrator -> localhost | ok: Runtime: 0:00:02.756758
2025-09-19 10:43:43.470502 |
2025-09-19 10:43:43.470628 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-19 10:43:43.505912 | orchestrator | ok
2025-09-19 10:43:43.518970 | orchestrator | included: /var/lib/zuul/builds/b1291048054043b3b0db75d23259f197/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-19 10:43:43.528487 |
2025-09-19 10:43:43.528590 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-19 10:43:43.552004 | orchestrator | skipping: Conditional result was False
2025-09-19 10:43:43.560637 |
2025-09-19 10:43:43.560742 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-19 10:43:44.136440 | orchestrator | changed
2025-09-19 10:43:44.147065 |
2025-09-19 10:43:44.147259 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-19 10:43:44.429224 | orchestrator | ok
2025-09-19 10:43:44.438132 |
2025-09-19 10:43:44.438313 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-19 10:43:44.845911 | orchestrator | ok
2025-09-19 10:43:44.854736 |
2025-09-19 10:43:44.854888 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-19 10:43:45.257287 | orchestrator | ok
2025-09-19 10:43:45.266289 |
2025-09-19 10:43:45.266418 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-19 10:43:45.292391 | orchestrator | skipping: Conditional result was False
2025-09-19 10:43:45.307305 |
2025-09-19 10:43:45.307460 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-19 10:43:45.736968 | orchestrator -> localhost | changed
2025-09-19 10:43:45.751382 |
2025-09-19 10:43:45.751522 | TASK [add-build-sshkey : Add back temp key]
2025-09-19 10:43:46.089489 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/b1291048054043b3b0db75d23259f197/work/b1291048054043b3b0db75d23259f197_id_rsa (zuul-build-sshkey)
2025-09-19 10:43:46.089808 | orchestrator -> localhost | ok: Runtime: 0:00:00.013277
2025-09-19 10:43:46.100557 |
2025-09-19 10:43:46.100704 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-19 10:43:46.506372 | orchestrator | ok
2025-09-19 10:43:46.512555 |
2025-09-19 10:43:46.512670 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-19 10:43:46.546273 | orchestrator | skipping: Conditional result was False
2025-09-19 10:43:46.594008 |
2025-09-19 10:43:46.594130 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-19 10:43:46.996720 | orchestrator | ok
2025-09-19 10:43:47.012605 |
2025-09-19 10:43:47.012746 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-19 10:43:47.055210 | orchestrator | ok
2025-09-19 10:43:47.065046 |
2025-09-19 10:43:47.065171 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-19 10:43:47.361759 | orchestrator -> localhost | ok
2025-09-19 10:43:47.369382 |
2025-09-19 10:43:47.369495 | TASK [validate-host : Collect information about the host]
2025-09-19 10:43:48.599068 | orchestrator | ok
2025-09-19 10:43:48.616827 |
2025-09-19 10:43:48.616959 | TASK [validate-host : Sanitize hostname]
2025-09-19 10:43:48.693332 | orchestrator | ok
2025-09-19 10:43:48.701676 |
2025-09-19 10:43:48.701803 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-19 10:43:49.265174 | orchestrator -> localhost | changed
2025-09-19 10:43:49.280972 |
2025-09-19 10:43:49.281195 | TASK [validate-host : Collect information about zuul worker]
2025-09-19 10:43:49.737079 | orchestrator | ok
2025-09-19 10:43:49.746106 |
2025-09-19 10:43:49.746278 | TASK [validate-host : Write out all zuul information for each host]
2025-09-19 10:43:50.288323 | orchestrator -> localhost | changed
2025-09-19 10:43:50.308788 |
2025-09-19 10:43:50.308936 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-19 10:43:50.588281 | orchestrator | ok
2025-09-19 10:43:50.597909 |
2025-09-19 10:43:50.598044 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-19 10:44:27.792554 | orchestrator | changed:
2025-09-19 10:44:27.792809 | orchestrator | .d..t...... src/
2025-09-19 10:44:27.792854 | orchestrator | .d..t...... src/github.com/
2025-09-19 10:44:27.792886 | orchestrator | .d..t...... src/github.com/osism/
2025-09-19 10:44:27.792914 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-19 10:44:27.792941 | orchestrator | RedHat.yml
2025-09-19 10:44:27.809358 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-19 10:44:27.809375 | orchestrator | RedHat.yml
2025-09-19 10:44:27.809428 | orchestrator | = 2.2.0"...
2025-09-19 10:44:38.632959 | orchestrator | 10:44:38.632 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-19 10:44:38.664502 | orchestrator | 10:44:38.664 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-09-19 10:44:38.835389 | orchestrator | 10:44:38.835 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-19 10:44:39.314297 | orchestrator | 10:44:39.314 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-19 10:44:39.387630 | orchestrator | 10:44:39.387 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-19 10:44:39.818576 | orchestrator | 10:44:39.818 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-19 10:44:39.898201 | orchestrator | 10:44:39.897 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
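The `.d..t......` / `.L..t......` strings in the synchronize task are rsync `--itemize-changes` flag strings: position 1 is the file type, and the remaining positions mark which attributes differed. A small decoder, written against the format documented in rsync(1) (the helper itself is hypothetical):

```python
FILE_TYPES = {"f": "file", "d": "directory", "L": "symlink",
              "D": "device", "S": "special"}

# Attribute slots after the first two characters, per rsync(1):
# checksum, size, mtime, perms, owner, group, reserved, acl, xattr.
ATTR_NAMES = ("checksum", "size", "mtime", "perms", "owner",
              "group", "reserved", "acl", "xattr")

def decode_itemized(s):
    """Decode an 11-char rsync -i string into (file kind, changed attrs)."""
    kind = FILE_TYPES.get(s[1], s[1])
    changed = [name for flag, name in zip(s[2:], ATTR_NAMES)
               if flag not in ". +"]
    return kind, changed
```

So `.d..t......` reads as "a directory whose modification time was updated", and `.L..t......` as the same for a symlink, which matches the `CentOS.yml -> RedHat.yml` link shown above.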
2025-09-19 10:44:40.675550 | orchestrator | 10:44:40.675 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-19 10:44:40.675777 | orchestrator | 10:44:40.675 STDOUT terraform: Providers are signed by their developers.
2025-09-19 10:44:40.675789 | orchestrator | 10:44:40.675 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-19 10:44:40.675795 | orchestrator | 10:44:40.675 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-19 10:44:40.676036 | orchestrator | 10:44:40.675 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-19 10:44:40.676050 | orchestrator | 10:44:40.675 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-19 10:44:40.676058 | orchestrator | 10:44:40.675 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-19 10:44:40.676062 | orchestrator | 10:44:40.675 STDOUT terraform: you run "tofu init" in the future.
2025-09-19 10:44:40.676598 | orchestrator | 10:44:40.676 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-19 10:44:40.676892 | orchestrator | 10:44:40.676 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-19 10:44:40.676899 | orchestrator | 10:44:40.676 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-19 10:44:40.676904 | orchestrator | 10:44:40.676 STDOUT terraform: should now work.
2025-09-19 10:44:40.676908 | orchestrator | 10:44:40.676 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-19 10:44:40.676912 | orchestrator | 10:44:40.676 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-19 10:44:40.676918 | orchestrator | 10:44:40.676 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-19 10:44:40.781241 | orchestrator | 10:44:40.779 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-09-19 10:44:40.781364 | orchestrator | 10:44:40.779 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-19 10:44:41.036089 | orchestrator | 10:44:41.034 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-19 10:44:41.036143 | orchestrator | 10:44:41.034 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-19 10:44:41.036152 | orchestrator | 10:44:41.034 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-19 10:44:41.036158 | orchestrator | 10:44:41.034 STDOUT terraform: for this configuration.
2025-09-19 10:44:41.183602 | orchestrator | 10:44:41.183 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-09-19 10:44:41.183684 | orchestrator | 10:44:41.183 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-09-19 10:44:41.289619 | orchestrator | 10:44:41.289 STDOUT terraform: ci.auto.tfvars
2025-09-19 10:44:41.417141 | orchestrator | 10:44:41.416 STDOUT terraform: default_custom.tf
2025-09-19 10:44:41.523725 | orchestrator | 10:44:41.523 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
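The repeated Terragrunt warnings ask for exactly one change: export `TG_TF_PATH` instead of the deprecated `TERRAGRUNT_TFPATH`. A hedged sketch of that single rename on an environment mapping (the helper name is made up for illustration; it is not part of Terragrunt):

```python
def migrate_tfpath(env):
    """Rename TERRAGRUNT_TFPATH to TG_TF_PATH, leaving other vars untouched."""
    env = dict(env)  # do not mutate the caller's mapping
    if "TERRAGRUNT_TFPATH" in env and "TG_TF_PATH" not in env:
        env["TG_TF_PATH"] = env.pop("TERRAGRUNT_TFPATH")
    return env
```

Applying this to the job's environment would silence the `TERRAGRUNT_TFPATH` warnings while keeping the same binary path (`/home/zuul-testbed01/terraform`).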
2025-09-19 10:44:42.429710 | orchestrator | 10:44:42.428 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-09-19 10:44:42.968393 | orchestrator | 10:44:42.968 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-09-19 10:44:43.190210 | orchestrator | 10:44:43.189 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-09-19 10:44:43.190308 | orchestrator | 10:44:43.189 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-09-19 10:44:43.190323 | orchestrator | 10:44:43.189 STDOUT terraform:  + create 2025-09-19 10:44:43.190336 | orchestrator | 10:44:43.189 STDOUT terraform:  <= read (data resources) 2025-09-19 10:44:43.190347 | orchestrator | 10:44:43.189 STDOUT terraform: OpenTofu will perform the following actions: 2025-09-19 10:44:43.190359 | orchestrator | 10:44:43.189 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-09-19 10:44:43.190370 | orchestrator | 10:44:43.189 STDOUT terraform:  # (config refers to values not yet known) 2025-09-19 10:44:43.190382 | orchestrator | 10:44:43.190 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-09-19 10:44:43.190393 | orchestrator | 10:44:43.190 STDOUT terraform:  + checksum = (known after apply) 2025-09-19 10:44:43.190404 | orchestrator | 10:44:43.190 STDOUT terraform:  + created_at = (known after apply) 2025-09-19 10:44:43.190427 | orchestrator | 10:44:43.190 STDOUT terraform:  + file = (known after apply) 2025-09-19 10:44:43.190439 | orchestrator | 10:44:43.190 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.190449 | orchestrator | 10:44:43.190 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 10:44:43.190512 | orchestrator | 10:44:43.190 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-09-19 10:44:43.190525 | orchestrator | 10:44:43.190 
STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-19 10:44:43.190537 | orchestrator | 10:44:43.190 STDOUT terraform:  + most_recent = true 2025-09-19 10:44:43.190547 | orchestrator | 10:44:43.190 STDOUT terraform:  + name = (known after apply) 2025-09-19 10:44:43.190558 | orchestrator | 10:44:43.190 STDOUT terraform:  + protected = (known after apply) 2025-09-19 10:44:43.190569 | orchestrator | 10:44:43.190 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.190580 | orchestrator | 10:44:43.190 STDOUT terraform:  + schema = (known after apply) 2025-09-19 10:44:43.190592 | orchestrator | 10:44:43.190 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-19 10:44:43.190602 | orchestrator | 10:44:43.190 STDOUT terraform:  + tags = (known after apply) 2025-09-19 10:44:43.190617 | orchestrator | 10:44:43.190 STDOUT terraform:  + updated_at = (known after apply) 2025-09-19 10:44:43.190629 | orchestrator | 10:44:43.190 STDOUT terraform:  } 2025-09-19 10:44:43.190645 | orchestrator | 10:44:43.190 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-09-19 10:44:43.190660 | orchestrator | 10:44:43.190 STDOUT terraform:  # (config refers to values not yet known) 2025-09-19 10:44:43.190674 | orchestrator | 10:44:43.190 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-09-19 10:44:43.190716 | orchestrator | 10:44:43.190 STDOUT terraform:  + checksum = (known after apply) 2025-09-19 10:44:43.190733 | orchestrator | 10:44:43.190 STDOUT terraform:  + created_at = (known after apply) 2025-09-19 10:44:43.190747 | orchestrator | 10:44:43.190 STDOUT terraform:  + file = (known after apply) 2025-09-19 10:44:43.190785 | orchestrator | 10:44:43.190 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.190801 | orchestrator | 10:44:43.190 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 10:44:43.190834 | orchestrator | 10:44:43.190 STDOUT terraform:  + 
min_disk_gb = (known after apply) 2025-09-19 10:44:43.190849 | orchestrator | 10:44:43.190 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-09-19 10:44:43.190888 | orchestrator | 10:44:43.190 STDOUT terraform:  + most_recent = true 2025-09-19 10:44:43.190904 | orchestrator | 10:44:43.190 STDOUT terraform:  + name = (known after apply) 2025-09-19 10:44:43.190918 | orchestrator | 10:44:43.190 STDOUT terraform:  + protected = (known after apply) 2025-09-19 10:44:43.190947 | orchestrator | 10:44:43.190 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.190962 | orchestrator | 10:44:43.190 STDOUT terraform:  + schema = (known after apply) 2025-09-19 10:44:43.191000 | orchestrator | 10:44:43.190 STDOUT terraform:  + size_bytes = (known after apply) 2025-09-19 10:44:43.191043 | orchestrator | 10:44:43.190 STDOUT terraform:  + tags = (known after apply) 2025-09-19 10:44:43.191058 | orchestrator | 10:44:43.191 STDOUT terraform:  + updated_at = (known after apply) 2025-09-19 10:44:43.191069 | orchestrator | 10:44:43.191 STDOUT terraform:  } 2025-09-19 10:44:43.191123 | orchestrator | 10:44:43.191 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-09-19 10:44:43.191149 | orchestrator | 10:44:43.191 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-09-19 10:44:43.191185 | orchestrator | 10:44:43.191 STDOUT terraform:  + content = (known after apply) 2025-09-19 10:44:43.191216 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-19 10:44:43.191246 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-19 10:44:43.191296 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-19 10:44:43.191311 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-19 10:44:43.191356 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_sha256 = (known after 
apply) 2025-09-19 10:44:43.191372 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-19 10:44:43.191386 | orchestrator | 10:44:43.191 STDOUT terraform:  + directory_permission = "0777" 2025-09-19 10:44:43.191444 | orchestrator | 10:44:43.191 STDOUT terraform:  + file_permission = "0644" 2025-09-19 10:44:43.191457 | orchestrator | 10:44:43.191 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-09-19 10:44:43.191472 | orchestrator | 10:44:43.191 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.191486 | orchestrator | 10:44:43.191 STDOUT terraform:  } 2025-09-19 10:44:43.191501 | orchestrator | 10:44:43.191 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-09-19 10:44:43.191557 | orchestrator | 10:44:43.191 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-09-19 10:44:43.191569 | orchestrator | 10:44:43.191 STDOUT terraform:  + content = (known after apply) 2025-09-19 10:44:43.191584 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-19 10:44:43.191637 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-19 10:44:43.191653 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-19 10:44:43.191697 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-19 10:44:43.191712 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-19 10:44:43.191757 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-19 10:44:43.191769 | orchestrator | 10:44:43.191 STDOUT terraform:  + directory_permission = "0777" 2025-09-19 10:44:43.191784 | orchestrator | 10:44:43.191 STDOUT terraform:  + file_permission = "0644" 2025-09-19 10:44:43.191839 | orchestrator | 10:44:43.191 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-09-19 
10:44:43.191851 | orchestrator | 10:44:43.191 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.191866 | orchestrator | 10:44:43.191 STDOUT terraform:  } 2025-09-19 10:44:43.191884 | orchestrator | 10:44:43.191 STDOUT terraform:  # local_file.inventory will be created 2025-09-19 10:44:43.191899 | orchestrator | 10:44:43.191 STDOUT terraform:  + resource "local_file" "inventory" { 2025-09-19 10:44:43.191913 | orchestrator | 10:44:43.191 STDOUT terraform:  + content = (known after apply) 2025-09-19 10:44:43.191977 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-19 10:44:43.191990 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-19 10:44:43.192033 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-19 10:44:43.192066 | orchestrator | 10:44:43.191 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-19 10:44:43.192112 | orchestrator | 10:44:43.192 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-19 10:44:43.192128 | orchestrator | 10:44:43.192 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-19 10:44:43.192143 | orchestrator | 10:44:43.192 STDOUT terraform:  + directory_permission = "0777" 2025-09-19 10:44:43.192157 | orchestrator | 10:44:43.192 STDOUT terraform:  + file_permission = "0644" 2025-09-19 10:44:43.192201 | orchestrator | 10:44:43.192 STDOUT terraform:  + filename = "inventory.ci" 2025-09-19 10:44:43.192217 | orchestrator | 10:44:43.192 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.192231 | orchestrator | 10:44:43.192 STDOUT terraform:  } 2025-09-19 10:44:43.192246 | orchestrator | 10:44:43.192 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-09-19 10:44:43.192291 | orchestrator | 10:44:43.192 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-09-19 10:44:43.192308 | orchestrator | 10:44:43.192 
STDOUT terraform:  + content = (sensitive value) 2025-09-19 10:44:43.192336 | orchestrator | 10:44:43.192 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-09-19 10:44:43.192380 | orchestrator | 10:44:43.192 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-09-19 10:44:43.192396 | orchestrator | 10:44:43.192 STDOUT terraform:  + content_md5 = (known after apply) 2025-09-19 10:44:43.192440 | orchestrator | 10:44:43.192 STDOUT terraform:  + content_sha1 = (known after apply) 2025-09-19 10:44:43.192455 | orchestrator | 10:44:43.192 STDOUT terraform:  + content_sha256 = (known after apply) 2025-09-19 10:44:43.192511 | orchestrator | 10:44:43.192 STDOUT terraform:  + content_sha512 = (known after apply) 2025-09-19 10:44:43.192523 | orchestrator | 10:44:43.192 STDOUT terraform:  + directory_permission = "0700" 2025-09-19 10:44:43.192538 | orchestrator | 10:44:43.192 STDOUT terraform:  + file_permission = "0600" 2025-09-19 10:44:43.192552 | orchestrator | 10:44:43.192 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-09-19 10:44:43.192596 | orchestrator | 10:44:43.192 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.192609 | orchestrator | 10:44:43.192 STDOUT terraform:  } 2025-09-19 10:44:43.192623 | orchestrator | 10:44:43.192 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-09-19 10:44:43.192637 | orchestrator | 10:44:43.192 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-09-19 10:44:43.192651 | orchestrator | 10:44:43.192 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.192665 | orchestrator | 10:44:43.192 STDOUT terraform:  } 2025-09-19 10:44:43.192723 | orchestrator | 10:44:43.192 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-09-19 10:44:43.192753 | orchestrator | 10:44:43.192 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-09-19 10:44:43.192780 | 
orchestrator | 10:44:43.192 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 10:44:43.192796 | orchestrator | 10:44:43.192 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:43.192841 | orchestrator | 10:44:43.192 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.192857 | orchestrator | 10:44:43.192 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 10:44:43.192913 | orchestrator | 10:44:43.192 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 10:44:43.192929 | orchestrator | 10:44:43.192 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-09-19 10:44:43.193023 | orchestrator | 10:44:43.192 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.193038 | orchestrator | 10:44:43.192 STDOUT terraform:  + size = 80 2025-09-19 10:44:43.193049 | orchestrator | 10:44:43.192 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 10:44:43.193064 | orchestrator | 10:44:43.192 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 10:44:43.193075 | orchestrator | 10:44:43.193 STDOUT terraform:  } 2025-09-19 10:44:43.193090 | orchestrator | 10:44:43.193 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-09-19 10:44:43.193135 | orchestrator | 10:44:43.193 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-09-19 10:44:43.193151 | orchestrator | 10:44:43.193 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 10:44:43.193165 | orchestrator | 10:44:43.193 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 10:44:43.193219 | orchestrator | 10:44:43.193 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.193236 | orchestrator | 10:44:43.193 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 10:44:43.193279 | orchestrator | 10:44:43.193 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 10:44:43.193333 | orchestrator | 10:44:43.193 STDOUT 
2025-09-19 10:44:43 | orchestrator | STDOUT terraform:
      + name                 = "testbed-volume-0-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-1-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[0] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
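The six identical `node_base_volume` entries in the plan above look like a single counted resource. A minimal HCL sketch of what could produce them, assuming a hardcoded count; the attribute values are taken from the plan output, but the actual osism/testbed Terraform source is not shown in this log, so treat this as a reconstruction, not the repository's own code:

```hcl
# Sketch only: values mirror the plan output above; the count literal is an
# assumption (the real configuration likely derives it from a variable).
resource "openstack_blockstorage_volume_v3" "node_base_volume" {
  count             = 6                                           # node_base_volume[0]..[5]
  name              = "testbed-volume-${count.index}-node-base"
  availability_zone = "nova"
  size              = 80
  volume_type       = "ssd"
  # volume_retype_policy = "never" is shown in the plan; it may simply be
  # the provider default rather than an explicit argument.
}
```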
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-0-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[1] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-1-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[6] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-6-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[7] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-7-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[8] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-8-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
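The nine `node_volume` entries follow a round-robin naming pattern over testbed-node-3..5 (`testbed-volume-0-node-3` through `testbed-volume-8-node-5`). A minimal HCL sketch that would yield those names; the count and the modulo expression are inferences from the plan output, not the testbed repository's actual code:

```hcl
# Sketch: nine 20 GB data volumes spread round-robin over nodes 3, 4, 5.
# index 0 -> node-3, 1 -> node-4, 2 -> node-5, 3 -> node-3, ... 8 -> node-5
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count             = 9
  name              = "testbed-volume-${count.index}-node-${3 + count.index % 3}"
  availability_zone = "nova"
  size              = 20
  volume_type       = "ssd"
}
```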
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (sensitive value)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
2025-09-19 10:44:43.235 | orchestrator | 10:44:43.235 STDOUT terraform:
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
2025-09-19 10:44:43.245 | orchestrator | 10:44:43.245 STDOUT terraform:
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
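For context, the plan entries above pair each node volume with an instance via `openstack_compute_volume_attach_v2`. A minimal HCL sketch of how such attachments are typically declared with `count` — this is an illustrative assumption, not the actual testbed Terraform configuration; `var.volume_count` and the `node_volume` resource name are hypothetical:

```hcl
# Hypothetical sketch: one volume per attachment index, attached to a
# node instance. Names and counts are assumptions for illustration.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = var.volume_count
  name  = "testbed-volume-${count.index}"
  size  = 20
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = var.volume_count
  instance_id = openstack_compute_instance_v2.node_server[count.index % length(openstack_compute_instance_v2.node_server)].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

Because `device`, `id`, `instance_id`, and `volume_id` all depend on resources created in the same apply, the plan shows them as `(known after apply)`.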
2025-09-19 10:44:43.248 | orchestrator | 10:44:43.248 STDOUT terraform:
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 10:44:43.270737 | orchestrator | 10:44:43.270 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 10:44:43.270920 | orchestrator | 10:44:43.270 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:43.271116 | orchestrator | 10:44:43.270 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 10:44:43.271318 | orchestrator | 10:44:43.271 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 10:44:43.271494 | orchestrator | 10:44:43.271 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 10:44:43.271704 | orchestrator | 10:44:43.271 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 10:44:43.271882 | orchestrator | 10:44:43.271 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.272086 | orchestrator | 10:44:43.271 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 10:44:43.272294 | orchestrator | 10:44:43.272 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 10:44:43.272461 | orchestrator | 10:44:43.272 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 10:44:43.272665 | orchestrator | 10:44:43.272 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 10:44:43.273020 | orchestrator | 10:44:43.272 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.273236 | orchestrator | 10:44:43.273 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 10:44:43.273409 | orchestrator | 10:44:43.273 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:43.273528 | orchestrator | 10:44:43.273 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.273699 | orchestrator | 10:44:43.273 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 10:44:43.273760 | orchestrator | 10:44:43.273 STDOUT terraform:  } 2025-09-19 10:44:43.274052 | orchestrator | 10:44:43.273 STDOUT terraform:  
+ allowed_address_pairs { 2025-09-19 10:44:43.274340 | orchestrator | 10:44:43.274 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 10:44:43.274481 | orchestrator | 10:44:43.274 STDOUT terraform:  } 2025-09-19 10:44:43.274640 | orchestrator | 10:44:43.274 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.274784 | orchestrator | 10:44:43.274 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 10:44:43.274862 | orchestrator | 10:44:43.274 STDOUT terraform:  } 2025-09-19 10:44:43.275097 | orchestrator | 10:44:43.274 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.275266 | orchestrator | 10:44:43.275 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 10:44:43.275338 | orchestrator | 10:44:43.275 STDOUT terraform:  } 2025-09-19 10:44:43.275466 | orchestrator | 10:44:43.275 STDOUT terraform:  + binding (known after apply) 2025-09-19 10:44:43.275586 | orchestrator | 10:44:43.275 STDOUT terraform:  + fixed_ip { 2025-09-19 10:44:43.275712 | orchestrator | 10:44:43.275 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-19 10:44:43.275860 | orchestrator | 10:44:43.275 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 10:44:43.275953 | orchestrator | 10:44:43.275 STDOUT terraform:  } 2025-09-19 10:44:43.276036 | orchestrator | 10:44:43.275 STDOUT terraform:  } 2025-09-19 10:44:43.276314 | orchestrator | 10:44:43.276 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-19 10:44:43.276663 | orchestrator | 10:44:43.276 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 10:44:43.276868 | orchestrator | 10:44:43.276 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 10:44:43.277088 | orchestrator | 10:44:43.276 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 10:44:43.277292 | orchestrator | 10:44:43.277 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-09-19 10:44:43.277528 | orchestrator | 10:44:43.277 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:43.277783 | orchestrator | 10:44:43.277 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 10:44:43.278032 | orchestrator | 10:44:43.277 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 10:44:43.278233 | orchestrator | 10:44:43.278 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 10:44:43.278411 | orchestrator | 10:44:43.278 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 10:44:43.278642 | orchestrator | 10:44:43.278 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.278845 | orchestrator | 10:44:43.278 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 10:44:43.279081 | orchestrator | 10:44:43.278 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 10:44:43.279258 | orchestrator | 10:44:43.279 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 10:44:43.279424 | orchestrator | 10:44:43.279 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 10:44:43.279688 | orchestrator | 10:44:43.279 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.279951 | orchestrator | 10:44:43.279 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 10:44:43.280137 | orchestrator | 10:44:43.279 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:43.280347 | orchestrator | 10:44:43.280 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.280586 | orchestrator | 10:44:43.280 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 10:44:43.280669 | orchestrator | 10:44:43.280 STDOUT terraform:  } 2025-09-19 10:44:43.280779 | orchestrator | 10:44:43.280 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.280915 | orchestrator | 10:44:43.280 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 10:44:43.281109 | 
orchestrator | 10:44:43.280 STDOUT terraform:  } 2025-09-19 10:44:43.281149 | orchestrator | 10:44:43.281 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.281333 | orchestrator | 10:44:43.281 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 10:44:43.281429 | orchestrator | 10:44:43.281 STDOUT terraform:  } 2025-09-19 10:44:43.281580 | orchestrator | 10:44:43.281 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.281734 | orchestrator | 10:44:43.281 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 10:44:43.281890 | orchestrator | 10:44:43.281 STDOUT terraform:  } 2025-09-19 10:44:43.282061 | orchestrator | 10:44:43.281 STDOUT terraform:  + binding (known after apply) 2025-09-19 10:44:43.282141 | orchestrator | 10:44:43.282 STDOUT terraform:  + fixed_ip { 2025-09-19 10:44:43.282353 | orchestrator | 10:44:43.282 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-19 10:44:43.282558 | orchestrator | 10:44:43.282 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 10:44:43.282593 | orchestrator | 10:44:43.282 STDOUT terraform:  } 2025-09-19 10:44:43.282669 | orchestrator | 10:44:43.282 STDOUT terraform:  } 2025-09-19 10:44:43.283214 | orchestrator | 10:44:43.282 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-19 10:44:43.283692 | orchestrator | 10:44:43.283 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 10:44:43.283899 | orchestrator | 10:44:43.283 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 10:44:43.284159 | orchestrator | 10:44:43.283 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 10:44:43.284570 | orchestrator | 10:44:43.284 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 10:44:43.285021 | orchestrator | 10:44:43.284 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:43.285653 | orchestrator | 
10:44:43.285 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 10:44:43.286169 | orchestrator | 10:44:43.285 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 10:44:43.286627 | orchestrator | 10:44:43.286 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 10:44:43.286907 | orchestrator | 10:44:43.286 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 10:44:43.287195 | orchestrator | 10:44:43.286 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.287467 | orchestrator | 10:44:43.287 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 10:44:43.287809 | orchestrator | 10:44:43.287 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 10:44:43.288152 | orchestrator | 10:44:43.287 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 10:44:43.288441 | orchestrator | 10:44:43.288 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 10:44:43.288763 | orchestrator | 10:44:43.288 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.289073 | orchestrator | 10:44:43.288 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 10:44:43.289320 | orchestrator | 10:44:43.289 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:43.289485 | orchestrator | 10:44:43.289 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.289732 | orchestrator | 10:44:43.289 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 10:44:43.289862 | orchestrator | 10:44:43.289 STDOUT terraform:  } 2025-09-19 10:44:43.290049 | orchestrator | 10:44:43.289 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.290340 | orchestrator | 10:44:43.290 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 10:44:43.290498 | orchestrator | 10:44:43.290 STDOUT terraform:  } 2025-09-19 10:44:43.290667 | orchestrator | 10:44:43.290 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 
10:44:43.290961 | orchestrator | 10:44:43.290 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 10:44:43.291138 | orchestrator | 10:44:43.290 STDOUT terraform:  } 2025-09-19 10:44:43.291274 | orchestrator | 10:44:43.291 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.291442 | orchestrator | 10:44:43.291 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 10:44:43.291549 | orchestrator | 10:44:43.291 STDOUT terraform:  } 2025-09-19 10:44:43.291696 | orchestrator | 10:44:43.291 STDOUT terraform:  + binding (known after apply) 2025-09-19 10:44:43.291757 | orchestrator | 10:44:43.291 STDOUT terraform:  + fixed_ip { 2025-09-19 10:44:43.291888 | orchestrator | 10:44:43.291 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-19 10:44:43.292093 | orchestrator | 10:44:43.291 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 10:44:43.292153 | orchestrator | 10:44:43.292 STDOUT terraform:  } 2025-09-19 10:44:43.292244 | orchestrator | 10:44:43.292 STDOUT terraform:  } 2025-09-19 10:44:43.292484 | orchestrator | 10:44:43.292 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-19 10:44:43.292713 | orchestrator | 10:44:43.292 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 10:44:43.292925 | orchestrator | 10:44:43.292 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 10:44:43.293312 | orchestrator | 10:44:43.292 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 10:44:43.293502 | orchestrator | 10:44:43.293 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 10:44:43.293702 | orchestrator | 10:44:43.293 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:43.293874 | orchestrator | 10:44:43.293 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 10:44:43.298102 | orchestrator | 10:44:43.293 STDOUT terraform:  + device_owner = (known after 
apply) 2025-09-19 10:44:43.298142 | orchestrator | 10:44:43.298 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 10:44:43.298160 | orchestrator | 10:44:43.298 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 10:44:43.298204 | orchestrator | 10:44:43.298 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.298239 | orchestrator | 10:44:43.298 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 10:44:43.298275 | orchestrator | 10:44:43.298 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 10:44:43.298325 | orchestrator | 10:44:43.298 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 10:44:43.298349 | orchestrator | 10:44:43.298 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 10:44:43.298384 | orchestrator | 10:44:43.298 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.298419 | orchestrator | 10:44:43.298 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 10:44:43.298454 | orchestrator | 10:44:43.298 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:43.298477 | orchestrator | 10:44:43.298 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.298502 | orchestrator | 10:44:43.298 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 10:44:43.298511 | orchestrator | 10:44:43.298 STDOUT terraform:  } 2025-09-19 10:44:43.298548 | orchestrator | 10:44:43.298 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.298558 | orchestrator | 10:44:43.298 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 10:44:43.298565 | orchestrator | 10:44:43.298 STDOUT terraform:  } 2025-09-19 10:44:43.298586 | orchestrator | 10:44:43.298 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.298613 | orchestrator | 10:44:43.298 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 10:44:43.298622 | orchestrator | 10:44:43.298 STDOUT terraform:  } 
2025-09-19 10:44:43.298642 | orchestrator | 10:44:43.298 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 10:44:43.298670 | orchestrator | 10:44:43.298 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 10:44:43.298679 | orchestrator | 10:44:43.298 STDOUT terraform:  } 2025-09-19 10:44:43.298702 | orchestrator | 10:44:43.298 STDOUT terraform:  + binding (known after apply) 2025-09-19 10:44:43.298712 | orchestrator | 10:44:43.298 STDOUT terraform:  + fixed_ip { 2025-09-19 10:44:43.298739 | orchestrator | 10:44:43.298 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-19 10:44:43.298768 | orchestrator | 10:44:43.298 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 10:44:43.298777 | orchestrator | 10:44:43.298 STDOUT terraform:  } 2025-09-19 10:44:43.298785 | orchestrator | 10:44:43.298 STDOUT terraform:  } 2025-09-19 10:44:43.298834 | orchestrator | 10:44:43.298 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-19 10:44:43.298881 | orchestrator | 10:44:43.298 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-19 10:44:43.298891 | orchestrator | 10:44:43.298 STDOUT terraform:  + force_destroy = false 2025-09-19 10:44:43.298933 | orchestrator | 10:44:43.298 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.298948 | orchestrator | 10:44:43.298 STDOUT terraform:  + port_id = (known after apply) 2025-09-19 10:44:43.298978 | orchestrator | 10:44:43.298 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.299018 | orchestrator | 10:44:43.298 STDOUT terraform:  + router_id = (known after apply) 2025-09-19 10:44:43.299059 | orchestrator | 10:44:43.299 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 10:44:43.299069 | orchestrator | 10:44:43.299 STDOUT terraform:  } 2025-09-19 10:44:43.299104 | orchestrator | 10:44:43.299 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-09-19 10:44:43.299140 | orchestrator | 10:44:43.299 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-19 10:44:43.299176 | orchestrator | 10:44:43.299 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 10:44:43.299212 | orchestrator | 10:44:43.299 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 10:44:43.299236 | orchestrator | 10:44:43.299 STDOUT terraform:  + availability_zone_hints = [ 2025-09-19 10:44:43.299249 | orchestrator | 10:44:43.299 STDOUT terraform:  + "nova", 2025-09-19 10:44:43.299257 | orchestrator | 10:44:43.299 STDOUT terraform:  ] 2025-09-19 10:44:43.299294 | orchestrator | 10:44:43.299 STDOUT terraform:  + distributed = (known after apply) 2025-09-19 10:44:43.299331 | orchestrator | 10:44:43.299 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-19 10:44:43.299379 | orchestrator | 10:44:43.299 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-19 10:44:43.299423 | orchestrator | 10:44:43.299 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-19 10:44:43.299448 | orchestrator | 10:44:43.299 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.299478 | orchestrator | 10:44:43.299 STDOUT terraform:  + name = "testbed" 2025-09-19 10:44:43.299514 | orchestrator | 10:44:43.299 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.299551 | orchestrator | 10:44:43.299 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:43.299580 | orchestrator | 10:44:43.299 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-19 10:44:43.299589 | orchestrator | 10:44:43.299 STDOUT terraform:  } 2025-09-19 10:44:43.299657 | orchestrator | 10:44:43.299 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-19 10:44:43.299712 | orchestrator | 10:44:43.299 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-19 10:44:43.299738 | orchestrator | 10:44:43.299 STDOUT terraform:  + description = "ssh" 2025-09-19 10:44:43.299766 | orchestrator | 10:44:43.299 STDOUT terraform:  + direction = "ingress" 2025-09-19 10:44:43.299791 | orchestrator | 10:44:43.299 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 10:44:43.299827 | orchestrator | 10:44:43.299 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.299851 | orchestrator | 10:44:43.299 STDOUT terraform:  + port_range_max = 22 2025-09-19 10:44:43.299887 | orchestrator | 10:44:43.299 STDOUT terraform:  + port_range_min = 22 2025-09-19 10:44:43.299913 | orchestrator | 10:44:43.299 STDOUT terraform:  + protocol = "tcp" 2025-09-19 10:44:43.299950 | orchestrator | 10:44:43.299 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.299986 | orchestrator | 10:44:43.299 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 10:44:43.300032 | orchestrator | 10:44:43.299 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 10:44:43.300062 | orchestrator | 10:44:43.300 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-19 10:44:43.300099 | orchestrator | 10:44:43.300 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 10:44:43.300134 | orchestrator | 10:44:43.300 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:43.300143 | orchestrator | 10:44:43.300 STDOUT terraform:  } 2025-09-19 10:44:43.300195 | orchestrator | 10:44:43.300 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-19 10:44:43.300247 | orchestrator | 10:44:43.300 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-19 10:44:43.300277 | orchestrator | 10:44:43.300 STDOUT terraform:  + description = "wireguard" 2025-09-19 10:44:43.300306 | orchestrator 
| 10:44:43.300 STDOUT terraform:  + direction = "ingress" 2025-09-19 10:44:43.300331 | orchestrator | 10:44:43.300 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 10:44:43.300367 | orchestrator | 10:44:43.300 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.300392 | orchestrator | 10:44:43.300 STDOUT terraform:  + port_range_max = 51820 2025-09-19 10:44:43.300415 | orchestrator | 10:44:43.300 STDOUT terraform:  + port_range_min = 51820 2025-09-19 10:44:43.300442 | orchestrator | 10:44:43.300 STDOUT terraform:  + protocol = "udp" 2025-09-19 10:44:43.300478 | orchestrator | 10:44:43.300 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.300512 | orchestrator | 10:44:43.300 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 10:44:43.300547 | orchestrator | 10:44:43.300 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 10:44:43.300576 | orchestrator | 10:44:43.300 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-19 10:44:43.300615 | orchestrator | 10:44:43.300 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 10:44:43.300649 | orchestrator | 10:44:43.300 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:43.300658 | orchestrator | 10:44:43.300 STDOUT terraform:  } 2025-09-19 10:44:43.300709 | orchestrator | 10:44:43.300 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-19 10:44:43.300761 | orchestrator | 10:44:43.300 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-19 10:44:43.300789 | orchestrator | 10:44:43.300 STDOUT terraform:  + direction = "ingress" 2025-09-19 10:44:43.300814 | orchestrator | 10:44:43.300 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 10:44:43.300850 | orchestrator | 10:44:43.300 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.300875 | orchestrator | 
10:44:43.300 STDOUT terraform:  + protocol = "tcp" 2025-09-19 10:44:43.300912 | orchestrator | 10:44:43.300 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.300945 | orchestrator | 10:44:43.300 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 10:44:43.300981 | orchestrator | 10:44:43.300 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 10:44:43.301040 | orchestrator | 10:44:43.300 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-19 10:44:43.301051 | orchestrator | 10:44:43.301 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 10:44:43.301086 | orchestrator | 10:44:43.301 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:43.301095 | orchestrator | 10:44:43.301 STDOUT terraform:  } 2025-09-19 10:44:43.301146 | orchestrator | 10:44:43.301 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-19 10:44:43.301197 | orchestrator | 10:44:43.301 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-19 10:44:43.301225 | orchestrator | 10:44:43.301 STDOUT terraform:  + direction = "ingress" 2025-09-19 10:44:43.301249 | orchestrator | 10:44:43.301 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 10:44:43.301284 | orchestrator | 10:44:43.301 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.301310 | orchestrator | 10:44:43.301 STDOUT terraform:  + protocol = "udp" 2025-09-19 10:44:43.301344 | orchestrator | 10:44:43.301 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.301379 | orchestrator | 10:44:43.301 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 10:44:43.301413 | orchestrator | 10:44:43.301 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 10:44:43.301448 | orchestrator | 10:44:43.301 STDOUT terraform:  + remote_ip_prefix = 
"192.168.16.0/20" 2025-09-19 10:44:43.301483 | orchestrator | 10:44:43.301 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 10:44:43.301519 | orchestrator | 10:44:43.301 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:43.301527 | orchestrator | 10:44:43.301 STDOUT terraform:  } 2025-09-19 10:44:43.301581 | orchestrator | 10:44:43.301 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-19 10:44:43.301633 | orchestrator | 10:44:43.301 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-19 10:44:43.301663 | orchestrator | 10:44:43.301 STDOUT terraform:  + direction = "ingress" 2025-09-19 10:44:43.301677 | orchestrator | 10:44:43.301 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 10:44:43.301720 | orchestrator | 10:44:43.301 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.301746 | orchestrator | 10:44:43.301 STDOUT terraform:  + protocol = "icmp" 2025-09-19 10:44:43.301785 | orchestrator | 10:44:43.301 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.301813 | orchestrator | 10:44:43.301 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 10:44:43.301842 | orchestrator | 10:44:43.301 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 10:44:43.301871 | orchestrator | 10:44:43.301 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-19 10:44:43.301905 | orchestrator | 10:44:43.301 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 10:44:43.301941 | orchestrator | 10:44:43.301 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:43.301949 | orchestrator | 10:44:43.301 STDOUT terraform:  } 2025-09-19 10:44:43.301999 | orchestrator | 10:44:43.301 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-19 10:44:43.302385 | 
orchestrator | 10:44:43.301 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-19 10:44:43.302705 | orchestrator | 10:44:43.302 STDOUT terraform:  + direction = "ingress" 2025-09-19 10:44:43.303023 | orchestrator | 10:44:43.302 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 10:44:43.303321 | orchestrator | 10:44:43.303 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.303552 | orchestrator | 10:44:43.303 STDOUT terraform:  + protocol = "tcp" 2025-09-19 10:44:43.303946 | orchestrator | 10:44:43.303 STDOUT terraform:  + region = (known after apply) 2025-09-19 10:44:43.304338 | orchestrator | 10:44:43.303 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 10:44:43.304694 | orchestrator | 10:44:43.304 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 10:44:43.304975 | orchestrator | 10:44:43.304 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-19 10:44:43.305480 | orchestrator | 10:44:43.304 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 10:44:43.305907 | orchestrator | 10:44:43.305 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 10:44:43.306097 | orchestrator | 10:44:43.305 STDOUT terraform:  } 2025-09-19 10:44:43.310147 | orchestrator | 10:44:43.306 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-19 10:44:43.310187 | orchestrator | 10:44:43.306 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-19 10:44:43.310194 | orchestrator | 10:44:43.306 STDOUT terraform:  + direction = "ingress" 2025-09-19 10:44:43.310200 | orchestrator | 10:44:43.307 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 10:44:43.310206 | orchestrator | 10:44:43.307 STDOUT terraform:  + id = (known after apply) 2025-09-19 10:44:43.310212 | orchestrator | 10:44:43.307 STDOUT terraform:  + protocol = "udp" 
2025-09-19 10:44:43.310217 | orchestrator | 10:44:43.307 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:43.310222 | orchestrator | 10:44:43.307 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 10:44:43.310228 | orchestrator | 10:44:43.308 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 10:44:43.310244 | orchestrator | 10:44:43.308 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 10:44:43.310250 | orchestrator | 10:44:43.308 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 10:44:43.310255 | orchestrator | 10:44:43.309 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 10:44:43.310260 | orchestrator | 10:44:43.309 STDOUT terraform:  }
2025-09-19 10:44:43.310597 | orchestrator | 10:44:43.309 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-09-19 10:44:43.311148 | orchestrator | 10:44:43.310 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-09-19 10:44:43.311461 | orchestrator | 10:44:43.311 STDOUT terraform:  + direction = "ingress"
2025-09-19 10:44:43.311943 | orchestrator | 10:44:43.311 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 10:44:43.312197 | orchestrator | 10:44:43.311 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:43.312506 | orchestrator | 10:44:43.312 STDOUT terraform:  + protocol = "icmp"
2025-09-19 10:44:43.312897 | orchestrator | 10:44:43.312 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:43.313369 | orchestrator | 10:44:43.312 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 10:44:43.313455 | orchestrator | 10:44:43.313 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 10:44:43.313505 | orchestrator | 10:44:43.313 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 10:44:43.313557 | orchestrator | 10:44:43.313 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 10:44:43.313614 | orchestrator | 10:44:43.313 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 10:44:43.313620 | orchestrator | 10:44:43.313 STDOUT terraform:  }
2025-09-19 10:44:43.313689 | orchestrator | 10:44:43.313 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-09-19 10:44:43.313743 | orchestrator | 10:44:43.313 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-09-19 10:44:43.313781 | orchestrator | 10:44:43.313 STDOUT terraform:  + description = "vrrp"
2025-09-19 10:44:43.313805 | orchestrator | 10:44:43.313 STDOUT terraform:  + direction = "ingress"
2025-09-19 10:44:43.313836 | orchestrator | 10:44:43.313 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 10:44:43.313909 | orchestrator | 10:44:43.313 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:43.313918 | orchestrator | 10:44:43.313 STDOUT terraform:  + protocol = "112"
2025-09-19 10:44:43.313955 | orchestrator | 10:44:43.313 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:43.313990 | orchestrator | 10:44:43.313 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 10:44:43.314070 | orchestrator | 10:44:43.313 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 10:44:43.314106 | orchestrator | 10:44:43.314 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 10:44:43.314149 | orchestrator | 10:44:43.314 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 10:44:43.314193 | orchestrator | 10:44:43.314 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 10:44:43.314209 | orchestrator | 10:44:43.314 STDOUT terraform:  }
2025-09-19 10:44:43.314276 | orchestrator | 10:44:43.314 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-09-19 10:44:43.314325 | orchestrator | 10:44:43.314 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-09-19 10:44:43.314360 | orchestrator | 10:44:43.314 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 10:44:43.314399 | orchestrator | 10:44:43.314 STDOUT terraform:  + description = "management security group"
2025-09-19 10:44:43.314433 | orchestrator | 10:44:43.314 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:43.314467 | orchestrator | 10:44:43.314 STDOUT terraform:  + name = "testbed-management"
2025-09-19 10:44:43.314509 | orchestrator | 10:44:43.314 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:43.314532 | orchestrator | 10:44:43.314 STDOUT terraform:  + stateful = (known after apply)
2025-09-19 10:44:43.314565 | orchestrator | 10:44:43.314 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 10:44:43.314583 | orchestrator | 10:44:43.314 STDOUT terraform:  }
2025-09-19 10:44:43.314639 | orchestrator | 10:44:43.314 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-09-19 10:44:43.314693 | orchestrator | 10:44:43.314 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-09-19 10:44:43.314728 | orchestrator | 10:44:43.314 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 10:44:43.314758 | orchestrator | 10:44:43.314 STDOUT terraform:  + description = "node security group"
2025-09-19 10:44:43.314791 | orchestrator | 10:44:43.314 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:43.314821 | orchestrator | 10:44:43.314 STDOUT terraform:  + name = "testbed-node"
2025-09-19 10:44:43.314853 | orchestrator | 10:44:43.314 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:43.314896 | orchestrator | 10:44:43.314 STDOUT terraform:  + stateful = (known after apply)
2025-09-19 10:44:43.314919 | orchestrator | 10:44:43.314 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 10:44:43.314935 | orchestrator | 10:44:43.314 STDOUT terraform:  }
2025-09-19 10:44:43.314987 | orchestrator | 10:44:43.314 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-09-19 10:44:43.315050 | orchestrator | 10:44:43.314 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-09-19 10:44:43.315085 | orchestrator | 10:44:43.315 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 10:44:43.315120 | orchestrator | 10:44:43.315 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-09-19 10:44:43.315144 | orchestrator | 10:44:43.315 STDOUT terraform:  + dns_nameservers = [
2025-09-19 10:44:43.315164 | orchestrator | 10:44:43.315 STDOUT terraform:  + "8.8.8.8",
2025-09-19 10:44:43.315183 | orchestrator | 10:44:43.315 STDOUT terraform:  + "9.9.9.9",
2025-09-19 10:44:43.315194 | orchestrator | 10:44:43.315 STDOUT terraform:  ]
2025-09-19 10:44:43.315218 | orchestrator | 10:44:43.315 STDOUT terraform:  + enable_dhcp = true
2025-09-19 10:44:43.315254 | orchestrator | 10:44:43.315 STDOUT terraform:  + gateway_ip = (known after apply)
2025-09-19 10:44:43.315292 | orchestrator | 10:44:43.315 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:43.315315 | orchestrator | 10:44:43.315 STDOUT terraform:  + ip_version = 4
2025-09-19 10:44:43.315350 | orchestrator | 10:44:43.315 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-09-19 10:44:43.315386 | orchestrator | 10:44:43.315 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-09-19 10:44:43.315429 | orchestrator | 10:44:43.315 STDOUT terraform:  + name = "subnet-testbed-management"
2025-09-19 10:44:43.315463 | orchestrator | 10:44:43.315 STDOUT terraform:  + network_id = (known after apply)
2025-09-19 10:44:43.315489 | orchestrator | 10:44:43.315 STDOUT terraform:  + no_gateway = false
2025-09-19 10:44:43.315536 | orchestrator | 10:44:43.315 STDOUT terraform:  + region = (known after apply)
2025-09-19 10:44:43.315574 | orchestrator | 10:44:43.315 STDOUT terraform:  + service_types = (known after apply)
2025-09-19 10:44:43.315613 | orchestrator | 10:44:43.315 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 10:44:43.315632 | orchestrator | 10:44:43.315 STDOUT terraform:  + allocation_pool {
2025-09-19 10:44:43.315660 | orchestrator | 10:44:43.315 STDOUT terraform:  + end = "192.168.31.250"
2025-09-19 10:44:43.315688 | orchestrator | 10:44:43.315 STDOUT terraform:  + start = "192.168.31.200"
2025-09-19 10:44:43.315704 | orchestrator | 10:44:43.315 STDOUT terraform:  }
2025-09-19 10:44:43.315719 | orchestrator | 10:44:43.315 STDOUT terraform:  }
2025-09-19 10:44:43.315749 | orchestrator | 10:44:43.315 STDOUT terraform:  # terraform_data.image will be created
2025-09-19 10:44:43.315777 | orchestrator | 10:44:43.315 STDOUT terraform:  + resource "terraform_data" "image" {
2025-09-19 10:44:43.315806 | orchestrator | 10:44:43.315 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:43.315830 | orchestrator | 10:44:43.315 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-19 10:44:43.315858 | orchestrator | 10:44:43.315 STDOUT terraform:  + output = (known after apply)
2025-09-19 10:44:43.315873 | orchestrator | 10:44:43.315 STDOUT terraform:  }
2025-09-19 10:44:43.315907 | orchestrator | 10:44:43.315 STDOUT terraform:  # terraform_data.image_node will be created
2025-09-19 10:44:43.315943 | orchestrator | 10:44:43.315 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-09-19 10:44:43.315968 | orchestrator | 10:44:43.315 STDOUT terraform:  + id = (known after apply)
2025-09-19 10:44:43.315991 | orchestrator | 10:44:43.315 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-19 10:44:43.316035 | orchestrator | 10:44:43.315 STDOUT terraform:  + output = (known after apply)
2025-09-19 10:44:43.316046 | orchestrator | 10:44:43.316 STDOUT terraform:  }
2025-09-19 10:44:43.316082 | orchestrator | 10:44:43.316 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-09-19 10:44:43.316095 | orchestrator | 10:44:43.316 STDOUT terraform: Changes to Outputs:
2025-09-19 10:44:43.316123 | orchestrator | 10:44:43.316 STDOUT terraform:  + manager_address = (sensitive value)
2025-09-19 10:44:43.316154 | orchestrator | 10:44:43.316 STDOUT terraform:  + private_key = (sensitive value)
2025-09-19 10:44:43.417390 | orchestrator | 10:44:43.415 STDOUT terraform: terraform_data.image_node: Creating...
2025-09-19 10:44:43.417460 | orchestrator | 10:44:43.416 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=56bbd8d7-9ac1-0cf1-61d4-39d95c1296f2]
2025-09-19 10:44:43.417470 | orchestrator | 10:44:43.416 STDOUT terraform: terraform_data.image: Creating...
2025-09-19 10:44:43.419522 | orchestrator | 10:44:43.419 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=4ba30708-04e3-a892-e5b6-f16fd8bd5486]
2025-09-19 10:44:43.431110 | orchestrator | 10:44:43.430 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-09-19 10:44:43.433880 | orchestrator | 10:44:43.433 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-09-19 10:44:43.443000 | orchestrator | 10:44:43.442 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-09-19 10:44:43.444167 | orchestrator | 10:44:43.444 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-09-19 10:44:43.446627 | orchestrator | 10:44:43.446 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-09-19 10:44:43.454786 | orchestrator | 10:44:43.454 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-19 10:44:43.455272 | orchestrator | 10:44:43.455 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-09-19 10:44:43.455453 | orchestrator | 10:44:43.455 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-19 10:44:43.455671 | orchestrator | 10:44:43.455 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-19 10:44:43.457644 | orchestrator | 10:44:43.457 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-09-19 10:44:43.870592 | orchestrator | 10:44:43.870 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-19 10:44:43.879865 | orchestrator | 10:44:43.879 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-09-19 10:44:43.885845 | orchestrator | 10:44:43.885 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-19 10:44:43.889749 | orchestrator | 10:44:43.889 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-19 10:44:43.954579 | orchestrator | 10:44:43.953 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-09-19 10:44:43.963727 | orchestrator | 10:44:43.963 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-19 10:44:44.443658 | orchestrator | 10:44:44.443 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=d7bc67b0-b79e-43ce-b23f-9922bae065d8]
2025-09-19 10:44:44.455382 | orchestrator | 10:44:44.455 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-19 10:44:47.040356 | orchestrator | 10:44:47.040 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c]
2025-09-19 10:44:47.046943 | orchestrator | 10:44:47.046 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-19 10:44:47.066316 | orchestrator | 10:44:47.066 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=729b54dd-f4c1-4a98-9e39-7aa2dbdf058c]
2025-09-19 10:44:47.071068 | orchestrator | 10:44:47.070 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-19 10:44:47.073079 | orchestrator | 10:44:47.072 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=2859ea6e-5cf3-4595-8353-f67711d21d4e]
2025-09-19 10:44:47.078713 | orchestrator | 10:44:47.078 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-19 10:44:47.093132 | orchestrator | 10:44:47.092 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=82c12b62-ffbd-484b-a107-b043e35ec15c]
2025-09-19 10:44:47.094221 | orchestrator | 10:44:47.094 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=ff354216-c1d2-4110-b9e3-f4cf06b21a62]
2025-09-19 10:44:47.101951 | orchestrator | 10:44:47.101 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-19 10:44:47.102447 | orchestrator | 10:44:47.102 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-19 10:44:47.117712 | orchestrator | 10:44:47.117 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=4ab3eba9-7f04-4545-b862-1d19a7d78b14]
2025-09-19 10:44:47.123368 | orchestrator | 10:44:47.123 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-19 10:44:47.129478 | orchestrator | 10:44:47.129 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=23c8bdec-2f7a-480a-98d1-592cee3b582b]
2025-09-19 10:44:47.147860 | orchestrator | 10:44:47.147 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-19 10:44:47.149567 | orchestrator | 10:44:47.149 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=a7da52da-8ff9-443f-9c01-2997209c642a]
2025-09-19 10:44:47.154724 | orchestrator | 10:44:47.154 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=bb0a6bd2741bb9fd6324d3c30b0e09962b9c5ccd]
2025-09-19 10:44:47.163392 | orchestrator | 10:44:47.163 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-19 10:44:47.168475 | orchestrator | 10:44:47.168 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-19 10:44:47.171249 | orchestrator | 10:44:47.171 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=0f9bbc9b5d1f3339df91bbbf4f1240b0c3026de6]
2025-09-19 10:44:47.173911 | orchestrator | 10:44:47.173 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=2d05b72c-4493-4412-ad25-c0b6cbf3de12]
2025-09-19 10:44:47.794817 | orchestrator | 10:44:47.794 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=ba575dcd-e52c-4614-affd-7d36970897ce]
2025-09-19 10:44:48.598831 | orchestrator | 10:44:48.598 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=4609d79a-3a0e-4a57-9951-a0e1e237c76c]
2025-09-19 10:44:48.607451 | orchestrator | 10:44:48.607 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-09-19 10:44:50.395835 | orchestrator | 10:44:50.395 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=20296197-9eb2-417a-a415-95b3bd769f62]
2025-09-19 10:44:50.458830 | orchestrator | 10:44:50.458 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=482f8994-f50e-4592-b361-7a4b29e22e2d]
2025-09-19 10:44:50.468340 | orchestrator | 10:44:50.468 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=0705e7c4-71e7-4335-94ae-66aba7e7deb2]
2025-09-19 10:44:50.478504 | orchestrator | 10:44:50.478 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb]
2025-09-19 10:44:50.483441 | orchestrator | 10:44:50.483 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=ee875cf9-0ab9-455c-b6ff-02f5d369ce10]
2025-09-19 10:44:50.516193 | orchestrator | 10:44:50.515 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=9a0a635f-bf96-4dca-b2b2-6665bc57a759]
2025-09-19 10:44:51.515170 | orchestrator | 10:44:51.514 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=1e3e3975-80f1-4b73-bc8e-6588a13bea55]
2025-09-19 10:44:51.523994 | orchestrator | 10:44:51.523 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-19 10:44:51.525051 | orchestrator | 10:44:51.524 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-19 10:44:51.526805 | orchestrator | 10:44:51.526 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-19 10:44:51.778820 | orchestrator | 10:44:51.778 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=ad172c38-2e5a-4b92-8cd5-182f61df2db9]
2025-09-19 10:44:51.789297 | orchestrator | 10:44:51.789 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-19 10:44:51.789775 | orchestrator | 10:44:51.789 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-19 10:44:51.791070 | orchestrator | 10:44:51.790 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-19 10:44:51.792260 | orchestrator | 10:44:51.792 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-19 10:44:51.797371 | orchestrator | 10:44:51.797 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-19 10:44:51.802447 | orchestrator | 10:44:51.802 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-19 10:44:51.807212 | orchestrator | 10:44:51.806 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-19 10:44:51.808617 | orchestrator | 10:44:51.808 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-19 10:44:52.010336 | orchestrator | 10:44:52.009 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=2ce4eafc-b595-44df-82ac-b41b1878507d]
2025-09-19 10:44:52.029468 | orchestrator | 10:44:52.029 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-19 10:44:52.199955 | orchestrator | 10:44:52.199 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=c8d0ceb5-9ea8-4590-b996-a49b3d540c99]
2025-09-19 10:44:52.213884 | orchestrator | 10:44:52.213 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-19 10:44:52.217100 | orchestrator | 10:44:52.216 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=73385826-f29a-4f16-841b-26c3903ff56f]
2025-09-19 10:44:52.223245 | orchestrator | 10:44:52.223 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-19 10:44:52.411260 | orchestrator | 10:44:52.410 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=5cb9a5d8-7b05-40ff-ac59-82291f14e777]
2025-09-19 10:44:52.423986 | orchestrator | 10:44:52.423 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-19 10:44:52.434731 | orchestrator | 10:44:52.434 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 0s [id=4e4786d2-8578-4737-8174-8097613a5382]
2025-09-19 10:44:52.440992 | orchestrator | 10:44:52.440 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-19 10:44:52.485552 | orchestrator | 10:44:52.485 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=2dd4c1bf-a6a2-4b82-8974-7d14e14fbf17]
2025-09-19 10:44:52.490672 | orchestrator | 10:44:52.490 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-19 10:44:52.509642 | orchestrator | 10:44:52.509 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=48a17fcc-9ee9-47cb-8c65-eca54642c0a2]
2025-09-19 10:44:52.514632 | orchestrator | 10:44:52.514 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-19 10:44:52.557262 | orchestrator | 10:44:52.556 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=c950ba4a-7941-4085-ad27-1cc4cd27435a]
2025-09-19 10:44:52.562537 | orchestrator | 10:44:52.562 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-19 10:44:52.644366 | orchestrator | 10:44:52.644 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=16ab2ef4-0b93-48fc-8d5a-0e978b60270c]
2025-09-19 10:44:52.679407 | orchestrator | 10:44:52.679 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=a77445a3-8c8d-4cb8-bc9b-df2b0e924118]
2025-09-19 10:44:52.736874 | orchestrator | 10:44:52.736 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=2844f12d-9df8-4d63-bd18-c65423acdd5e]
2025-09-19 10:44:52.810001 | orchestrator | 10:44:52.809 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=01efda73-cd4e-49ee-a89a-00c444b6a41a]
2025-09-19 10:44:53.108207 | orchestrator | 10:44:53.107 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=0816f492-d588-41f1-83e2-52dd736f7cca]
2025-09-19 10:44:53.151319 | orchestrator | 10:44:53.150 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=620ed29f-5ab0-48d2-aaa8-567386b78b69]
2025-09-19 10:44:53.528812 | orchestrator | 10:44:53.528 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 2s [id=efe6dde7-c375-4a31-b3e3-3f638c0ba0db]
2025-09-19 10:44:53.955157 | orchestrator | 10:44:53.954 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=a881a277-a5ea-456d-9e6c-1f23dc8cded7]
2025-09-19 10:44:54.137943 | orchestrator | 10:44:54.111 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=590968b0-f8dd-413e-a625-d3b2c31b180e]
2025-09-19 10:44:54.939815 | orchestrator | 10:44:54.939 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=112d65ba-e3a8-4adc-ac71-3afd3baa0efa]
2025-09-19 10:44:54.964616 | orchestrator | 10:44:54.962 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-19 10:44:54.978472 | orchestrator | 10:44:54.978 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-19 10:44:54.987059 | orchestrator | 10:44:54.986 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-19 10:44:54.987431 | orchestrator | 10:44:54.987 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-19 10:44:54.987981 | orchestrator | 10:44:54.987 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-19 10:44:54.989221 | orchestrator | 10:44:54.989 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-19 10:44:54.997642 | orchestrator | 10:44:54.997 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-19 10:44:57.065348 | orchestrator | 10:44:57.064 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=93654c27-8fb4-48bd-9ec0-29b52f72c8b6]
2025-09-19 10:44:57.075419 | orchestrator | 10:44:57.075 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-19 10:44:57.081371 | orchestrator | 10:44:57.081 STDOUT terraform: local_file.inventory: Creating...
2025-09-19 10:44:57.085815 | orchestrator | 10:44:57.085 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-19 10:44:57.086755 | orchestrator | 10:44:57.086 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=793a9f130468d46e0f79acbf1e75aba0a5dd4cde]
2025-09-19 10:44:57.090097 | orchestrator | 10:44:57.089 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=e60a2f1e64a6363f295cd34a3ab1717d8e145d1c]
2025-09-19 10:44:57.915079 | orchestrator | 10:44:57.914 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=93654c27-8fb4-48bd-9ec0-29b52f72c8b6]
2025-09-19 10:45:04.980117 | orchestrator | 10:45:04.979 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-19 10:45:04.992205 | orchestrator | 10:45:04.991 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-19 10:45:04.993368 | orchestrator | 10:45:04.993 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-19 10:45:04.996592 | orchestrator | 10:45:04.996 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-19 10:45:04.997786 | orchestrator | 10:45:04.997 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-19 10:45:05.003123 | orchestrator | 10:45:05.002 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-19 10:45:14.980480 | orchestrator | 10:45:14.980 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-19 10:45:14.992864 | orchestrator | 10:45:14.992 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-19 10:45:14.993743 | orchestrator | 10:45:14.993 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-19 10:45:14.997159 | orchestrator | 10:45:14.996 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-19 10:45:14.998248 | orchestrator | 10:45:14.998 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-19 10:45:15.003626 | orchestrator | 10:45:15.003 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-19 10:45:15.578699 | orchestrator | 10:45:15.578 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=ecdb9d75-e1d8-4158-b455-b6c698683fd4]
2025-09-19 10:45:24.980891 | orchestrator | 10:45:24.980 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-09-19 10:45:24.994079 | orchestrator | 10:45:24.993 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-09-19 10:45:24.997234 | orchestrator | 10:45:24.997 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-09-19 10:45:24.998339 | orchestrator | 10:45:24.998 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-09-19 10:45:25.004591 | orchestrator | 10:45:25.004 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-09-19 10:45:25.766438 | orchestrator | 10:45:25.766 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=70322281-46e0-430d-94b8-4082713caaf1]
2025-09-19 10:45:25.852403 | orchestrator | 10:45:25.852 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=5ace7d6b-5686-4adb-9342-4af8d01e440b]
2025-09-19 10:45:25.878758 | orchestrator | 10:45:25.878 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=73784d5c-e72a-4aeb-b20c-3d13574d7f1b]
2025-09-19 10:45:26.027973 | orchestrator | 10:45:26.027 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=483f2796-de32-4414-be7c-22b7ec4fbf98]
2025-09-19 10:45:26.065427 | orchestrator | 10:45:26.065 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=774217de-638b-4b99-80fb-2f9839023865]
2025-09-19 10:45:26.079157 | orchestrator | 10:45:26.078 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-19 10:45:26.082814 | orchestrator | 10:45:26.082 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-19 10:45:26.087085 | orchestrator | 10:45:26.086 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1236554588286484456]
2025-09-19 10:45:26.112524 | orchestrator | 10:45:26.112 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-19 10:45:26.113124 | orchestrator | 10:45:26.112 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-19 10:45:26.115377 | orchestrator | 10:45:26.115 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-19 10:45:26.115995 | orchestrator | 10:45:26.115 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-09-19 10:45:26.116421 | orchestrator | 10:45:26.116 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-09-19 10:45:26.120893 | orchestrator | 10:45:26.120 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-09-19 10:45:26.127449 | orchestrator | 10:45:26.127 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-09-19 10:45:26.131810 | orchestrator | 10:45:26.130 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-09-19 10:45:26.132340 | orchestrator | 10:45:26.132 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-09-19 10:45:29.474901 | orchestrator | 10:45:29.474 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=73784d5c-e72a-4aeb-b20c-3d13574d7f1b/2859ea6e-5cf3-4595-8353-f67711d21d4e] 2025-09-19 10:45:29.539743 | orchestrator | 10:45:29.539 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=ecdb9d75-e1d8-4158-b455-b6c698683fd4/23c8bdec-2f7a-480a-98d1-592cee3b582b] 2025-09-19 10:45:29.555207 | orchestrator | 10:45:29.554 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=73784d5c-e72a-4aeb-b20c-3d13574d7f1b/ff354216-c1d2-4110-b9e3-f4cf06b21a62] 2025-09-19 10:45:29.571201 | orchestrator | 10:45:29.570 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=483f2796-de32-4414-be7c-22b7ec4fbf98/2d05b72c-4493-4412-ad25-c0b6cbf3de12] 2025-09-19 10:45:29.576851 | orchestrator | 10:45:29.576 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation 
complete after 4s [id=ecdb9d75-e1d8-4158-b455-b6c698683fd4/82c12b62-ffbd-484b-a107-b043e35ec15c] 2025-09-19 10:45:29.603636 | orchestrator | 10:45:29.603 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=73784d5c-e72a-4aeb-b20c-3d13574d7f1b/729b54dd-f4c1-4a98-9e39-7aa2dbdf058c] 2025-09-19 10:45:29.745915 | orchestrator | 10:45:29.745 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=483f2796-de32-4414-be7c-22b7ec4fbf98/a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c] 2025-09-19 10:45:35.683023 | orchestrator | 10:45:35.682 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=483f2796-de32-4414-be7c-22b7ec4fbf98/a7da52da-8ff9-443f-9c01-2997209c642a] 2025-09-19 10:45:35.693205 | orchestrator | 10:45:35.692 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 10s [id=ecdb9d75-e1d8-4158-b455-b6c698683fd4/4ab3eba9-7f04-4545-b862-1d19a7d78b14] 2025-09-19 10:45:36.123561 | orchestrator | 10:45:36.123 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-09-19 10:45:46.124449 | orchestrator | 10:45:46.124 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-09-19 10:45:46.535730 | orchestrator | 10:45:46.535 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=5645cea1-6159-4fc2-8604-ff0f83c97416] 2025-09-19 10:45:46.559659 | orchestrator | 10:45:46.559 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
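The "Still creating... [Ns elapsed]" lines above are Terraform polling each resource until it reaches a ready state, and the same poll-until-ready-with-timeout pattern reappears later in this job (the "wait up to 300 seconds for port 22" task). A minimal generic sketch of that pattern; the `wait_until` helper name is illustrative and not part of the testbed scripts:

```shell
#!/bin/sh
# wait_until TIMEOUT INTERVAL CMD... : poll CMD until it succeeds or
# TIMEOUT seconds have elapsed. Mirrors the poll-until-ready behaviour
# visible in the log (Terraform "Still creating..." / Ansible wait_for).
wait_until() {
    timeout="$1"; interval="$2"; shift 2
    elapsed=0
    while ! "$@"; do
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "timed out after ${elapsed}s" >&2
            return 1
        fi
        sleep "$interval"
        elapsed=$((elapsed + interval))
        echo "still waiting... [${elapsed}s elapsed]" >&2
    done
    return 0
}
```

A typical use in this context would be `wait_until 300 10 test -f /var/lib/cloud/instance/boot-finished`, matching the cloud-init check that runs later in the job.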
2025-09-19 10:45:46.559769 | orchestrator | 10:45:46.559 STDOUT terraform: Outputs: 2025-09-19 10:45:46.559780 | orchestrator | 10:45:46.559 STDOUT terraform: manager_address = 2025-09-19 10:45:46.559806 | orchestrator | 10:45:46.559 STDOUT terraform: private_key = 2025-09-19 10:45:46.888694 | orchestrator | ok: Runtime: 0:01:08.365334 2025-09-19 10:45:46.932672 | 2025-09-19 10:45:46.932838 | TASK [Fetch manager address] 2025-09-19 10:45:47.385236 | orchestrator | ok 2025-09-19 10:45:47.394986 | 2025-09-19 10:45:47.395112 | TASK [Set manager_host address] 2025-09-19 10:45:47.474210 | orchestrator | ok 2025-09-19 10:45:47.483435 | 2025-09-19 10:45:47.483585 | LOOP [Update ansible collections] 2025-09-19 10:45:49.214633 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 10:45:49.215298 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-19 10:45:49.215355 | orchestrator | Starting galaxy collection install process 2025-09-19 10:45:49.215381 | orchestrator | Process install dependency map 2025-09-19 10:45:49.215403 | orchestrator | Starting collection install process 2025-09-19 10:45:49.215438 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons' 2025-09-19 10:45:49.215464 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons 2025-09-19 10:45:49.215490 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-09-19 10:45:49.215545 | orchestrator | ok: Item: commons Runtime: 0:00:01.439839 2025-09-19 10:45:50.182604 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 10:45:50.182770 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-19 10:45:50.182821 | orchestrator | Starting galaxy 
collection install process 2025-09-19 10:45:50.182891 | orchestrator | Process install dependency map 2025-09-19 10:45:50.182928 | orchestrator | Starting collection install process 2025-09-19 10:45:50.182961 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services' 2025-09-19 10:45:50.182994 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services 2025-09-19 10:45:50.183026 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-19 10:45:50.183075 | orchestrator | ok: Item: services Runtime: 0:00:00.710328 2025-09-19 10:45:50.207382 | 2025-09-19 10:45:50.207550 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-19 10:46:00.766303 | orchestrator | ok 2025-09-19 10:46:00.775866 | 2025-09-19 10:46:00.775976 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-19 10:47:00.817598 | orchestrator | ok 2025-09-19 10:47:00.828118 | 2025-09-19 10:47:00.828291 | TASK [Fetch manager ssh hostkey] 2025-09-19 10:47:02.409355 | orchestrator | Output suppressed because no_log was given 2025-09-19 10:47:02.424963 | 2025-09-19 10:47:02.425122 | TASK [Get ssh keypair from terraform environment] 2025-09-19 10:47:02.961826 | orchestrator | ok: Runtime: 0:00:00.006260 2025-09-19 10:47:02.978279 | 2025-09-19 10:47:02.978445 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-19 10:47:03.016421 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
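The task above does not just wait for port 22 to open; it also requires the banner served on that port to contain "OpenSSH" (Ansible's `wait_for` module supports this via `search_regex`), so the job proceeds only once sshd itself is answering. A sketch of the banner check in isolation; the `banner_is_openssh` name is illustrative only:

```shell
#!/bin/sh
# Check whether an SSH banner line identifies an OpenSSH server, as the
# 'Wait up to 300 seconds for port 22 ... contain "OpenSSH"' task does.
banner_is_openssh() {
    case "$1" in
        *OpenSSH*) return 0 ;;
        *) return 1 ;;
    esac
}

# Fetching the banner itself could look like this (bash-only /dev/tcp):
#   read -r banner < /dev/tcp/"$manager_host"/22
```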
2025-09-19 10:47:03.025628 | 2025-09-19 10:47:03.025757 | TASK [Run manager part 0] 2025-09-19 10:47:03.908716 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 10:47:03.951775 | orchestrator | 2025-09-19 10:47:03.951848 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-19 10:47:03.951856 | orchestrator | 2025-09-19 10:47:03.951869 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-19 10:47:05.743187 | orchestrator | ok: [testbed-manager] 2025-09-19 10:47:05.743258 | orchestrator | 2025-09-19 10:47:05.743291 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-19 10:47:05.743305 | orchestrator | 2025-09-19 10:47:05.743328 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 10:47:07.518236 | orchestrator | ok: [testbed-manager] 2025-09-19 10:47:07.518364 | orchestrator | 2025-09-19 10:47:07.518385 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-19 10:47:08.158999 | orchestrator | ok: [testbed-manager] 2025-09-19 10:47:08.159043 | orchestrator | 2025-09-19 10:47:08.159050 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-19 10:47:08.437334 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:08.437416 | orchestrator | 2025-09-19 10:47:08.437426 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-19 10:47:08.459955 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:08.459998 | orchestrator | 2025-09-19 10:47:08.460006 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-19 10:47:08.492283 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:08.492344 | 
orchestrator | 2025-09-19 10:47:08.492354 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-19 10:47:08.521868 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:08.521925 | orchestrator | 2025-09-19 10:47:08.521935 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-19 10:47:08.561456 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:08.561506 | orchestrator | 2025-09-19 10:47:08.561516 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-19 10:47:08.598112 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:08.598171 | orchestrator | 2025-09-19 10:47:08.598184 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-19 10:47:08.638366 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:47:08.638428 | orchestrator | 2025-09-19 10:47:08.638441 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-19 10:47:09.415877 | orchestrator | changed: [testbed-manager] 2025-09-19 10:47:09.415965 | orchestrator | 2025-09-19 10:47:09.415979 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-19 10:49:42.400682 | orchestrator | changed: [testbed-manager] 2025-09-19 10:49:42.400749 | orchestrator | 2025-09-19 10:49:42.400765 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-19 10:51:08.459317 | orchestrator | changed: [testbed-manager] 2025-09-19 10:51:08.459414 | orchestrator | 2025-09-19 10:51:08.459431 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-19 10:51:28.293408 | orchestrator | changed: [testbed-manager] 2025-09-19 10:51:28.293499 | orchestrator | 2025-09-19 10:51:28.293518 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-09-19 10:51:37.235990 | orchestrator | changed: [testbed-manager] 2025-09-19 10:51:37.236078 | orchestrator | 2025-09-19 10:51:37.236094 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-19 10:51:37.286527 | orchestrator | ok: [testbed-manager] 2025-09-19 10:51:37.286608 | orchestrator | 2025-09-19 10:51:37.286624 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-19 10:51:38.078406 | orchestrator | ok: [testbed-manager] 2025-09-19 10:51:38.078453 | orchestrator | 2025-09-19 10:51:38.078465 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-19 10:51:38.820989 | orchestrator | changed: [testbed-manager] 2025-09-19 10:51:38.821023 | orchestrator | 2025-09-19 10:51:38.821030 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-19 10:51:45.208276 | orchestrator | changed: [testbed-manager] 2025-09-19 10:51:45.208371 | orchestrator | 2025-09-19 10:51:45.208411 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-19 10:51:51.286181 | orchestrator | changed: [testbed-manager] 2025-09-19 10:51:51.286326 | orchestrator | 2025-09-19 10:51:51.286347 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-19 10:51:54.012045 | orchestrator | changed: [testbed-manager] 2025-09-19 10:51:54.012136 | orchestrator | 2025-09-19 10:51:54.012153 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-19 10:51:55.767968 | orchestrator | changed: [testbed-manager] 2025-09-19 10:51:55.768016 | orchestrator | 2025-09-19 10:51:55.768025 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-19 
10:51:56.919997 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-19 10:51:56.920084 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-19 10:51:56.920099 | orchestrator | 2025-09-19 10:51:56.920111 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-19 10:51:56.962686 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-19 10:51:56.962795 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-19 10:51:56.962812 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-19 10:51:56.962826 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-19 10:52:01.014933 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-19 10:52:01.015025 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-19 10:52:01.015040 | orchestrator | 2025-09-19 10:52:01.015053 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-19 10:52:01.610791 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:01.610835 | orchestrator | 2025-09-19 10:52:01.610844 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-19 10:52:20.154012 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-19 10:52:20.154158 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-19 10:52:20.154211 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-19 10:52:20.154224 | orchestrator | 2025-09-19 10:52:20.154236 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-19 10:52:22.570931 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-09-19 10:52:22.571033 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-19 10:52:22.571057 | orchestrator | 2025-09-19 10:52:22.571076 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-19 10:52:22.571096 | orchestrator | 2025-09-19 10:52:22.571114 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 10:52:23.957343 | orchestrator | ok: [testbed-manager] 2025-09-19 10:52:23.957381 | orchestrator | 2025-09-19 10:52:23.957388 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-19 10:52:24.006388 | orchestrator | ok: [testbed-manager] 2025-09-19 10:52:24.006430 | orchestrator | 2025-09-19 10:52:24.006438 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-19 10:52:24.076030 | orchestrator | ok: [testbed-manager] 2025-09-19 10:52:24.076074 | orchestrator | 2025-09-19 10:52:24.076082 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-19 10:52:24.852877 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:24.852987 | orchestrator | 2025-09-19 10:52:24.853004 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-19 10:52:25.587934 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:25.588050 | orchestrator | 2025-09-19 10:52:25.588067 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-19 10:52:26.975963 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-19 10:52:26.976076 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-19 10:52:26.976093 | orchestrator | 2025-09-19 10:52:26.976127 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-09-19 10:52:28.349307 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:28.349415 | orchestrator | 2025-09-19 10:52:28.349432 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-19 10:52:30.095939 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 10:52:30.096030 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-19 10:52:30.096045 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-19 10:52:30.096057 | orchestrator | 2025-09-19 10:52:30.096070 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-19 10:52:30.152697 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:52:30.152749 | orchestrator | 2025-09-19 10:52:30.152756 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-19 10:52:30.724812 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:30.724900 | orchestrator | 2025-09-19 10:52:30.724917 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-19 10:52:30.794586 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:52:30.794635 | orchestrator | 2025-09-19 10:52:30.794641 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-19 10:52:31.639195 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 10:52:31.639279 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:31.639296 | orchestrator | 2025-09-19 10:52:31.639308 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-19 10:52:31.678458 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:52:31.678550 | orchestrator | 2025-09-19 10:52:31.678566 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-19 10:52:31.719640 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:52:31.719710 | orchestrator | 2025-09-19 10:52:31.719721 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-19 10:52:31.757764 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:52:31.757821 | orchestrator | 2025-09-19 10:52:31.757829 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-19 10:52:31.807978 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:52:31.808040 | orchestrator | 2025-09-19 10:52:31.808049 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-19 10:52:32.521321 | orchestrator | ok: [testbed-manager] 2025-09-19 10:52:32.521407 | orchestrator | 2025-09-19 10:52:32.521423 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-19 10:52:32.521436 | orchestrator | 2025-09-19 10:52:32.521447 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 10:52:33.874372 | orchestrator | ok: [testbed-manager] 2025-09-19 10:52:33.874460 | orchestrator | 2025-09-19 10:52:33.874475 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-19 10:52:34.848450 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:34.848538 | orchestrator | 2025-09-19 10:52:34.848555 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 10:52:34.848569 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-19 10:52:34.848581 | orchestrator | 2025-09-19 10:52:35.293448 | orchestrator | ok: Runtime: 0:05:31.637863 2025-09-19 10:52:35.310550 | 2025-09-19 10:52:35.310681 | TASK [Point 
out that the login on the manager is now possible] 2025-09-19 10:52:35.353554 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-19 10:52:35.365542 | 2025-09-19 10:52:35.365664 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-19 10:52:35.399521 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-19 10:52:35.407516 | 2025-09-19 10:52:35.407617 | TASK [Run manager part 1 + 2] 2025-09-19 10:52:36.265521 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 10:52:36.322895 | orchestrator | 2025-09-19 10:52:36.322946 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-19 10:52:36.322953 | orchestrator | 2025-09-19 10:52:36.322966 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 10:52:38.816667 | orchestrator | ok: [testbed-manager] 2025-09-19 10:52:38.816718 | orchestrator | 2025-09-19 10:52:38.816739 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-19 10:52:38.849969 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:52:38.850038 | orchestrator | 2025-09-19 10:52:38.850049 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-19 10:52:38.882635 | orchestrator | ok: [testbed-manager] 2025-09-19 10:52:38.882683 | orchestrator | 2025-09-19 10:52:38.882691 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-19 10:52:38.920316 | orchestrator | ok: [testbed-manager] 2025-09-19 10:52:38.920386 | orchestrator | 2025-09-19 10:52:38.920401 | orchestrator | TASK [osism.commons.repository : Set repository_default fact 
to default value] *** 2025-09-19 10:52:38.990226 | orchestrator | ok: [testbed-manager] 2025-09-19 10:52:38.990309 | orchestrator | 2025-09-19 10:52:38.990325 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-19 10:52:39.055988 | orchestrator | ok: [testbed-manager] 2025-09-19 10:52:39.056072 | orchestrator | 2025-09-19 10:52:39.056089 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-19 10:52:39.110787 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-19 10:52:39.110879 | orchestrator | 2025-09-19 10:52:39.110896 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-19 10:52:39.876196 | orchestrator | ok: [testbed-manager] 2025-09-19 10:52:39.876331 | orchestrator | 2025-09-19 10:52:39.876351 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-19 10:52:39.927049 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:52:39.927165 | orchestrator | 2025-09-19 10:52:39.927184 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-19 10:52:41.250075 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:41.250174 | orchestrator | 2025-09-19 10:52:41.250194 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-19 10:52:41.790713 | orchestrator | ok: [testbed-manager] 2025-09-19 10:52:41.790789 | orchestrator | 2025-09-19 10:52:41.790804 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-19 10:52:42.872757 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:42.872821 | orchestrator | 2025-09-19 10:52:42.872838 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-09-19 10:52:58.073370 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:58.073452 | orchestrator | 2025-09-19 10:52:58.073468 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-19 10:52:58.726851 | orchestrator | ok: [testbed-manager] 2025-09-19 10:52:58.726885 | orchestrator | 2025-09-19 10:52:58.726895 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-19 10:52:58.777279 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:52:58.777314 | orchestrator | 2025-09-19 10:52:58.777322 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-19 10:52:59.709407 | orchestrator | changed: [testbed-manager] 2025-09-19 10:52:59.709447 | orchestrator | 2025-09-19 10:52:59.709454 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-19 10:53:00.673494 | orchestrator | changed: [testbed-manager] 2025-09-19 10:53:00.673573 | orchestrator | 2025-09-19 10:53:00.673587 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-19 10:53:01.227514 | orchestrator | changed: [testbed-manager] 2025-09-19 10:53:01.227552 | orchestrator | 2025-09-19 10:53:01.227558 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-19 10:53:01.268080 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-19 10:53:01.268222 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-19 10:53:01.268238 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-19 10:53:01.268252 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-19 10:53:04.016561 | orchestrator | changed: [testbed-manager] 2025-09-19 10:53:04.016654 | orchestrator | 2025-09-19 10:53:04.016671 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-19 10:53:13.054484 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-19 10:53:13.054532 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-19 10:53:13.054540 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-19 10:53:13.054547 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-19 10:53:13.054557 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-19 10:53:13.054563 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-19 10:53:13.054569 | orchestrator | 2025-09-19 10:53:13.054575 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-19 10:53:14.127274 | orchestrator | changed: [testbed-manager] 2025-09-19 10:53:14.127344 | orchestrator | 2025-09-19 10:53:14.127353 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-19 10:53:14.167235 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:53:14.167329 | orchestrator | 2025-09-19 10:53:14.167345 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-19 10:53:17.448112 | orchestrator | changed: [testbed-manager] 2025-09-19 10:53:17.448174 | orchestrator | 2025-09-19 10:53:17.448181 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-19 10:53:17.487208 | orchestrator | skipping: [testbed-manager] 2025-09-19 10:53:17.487261 | orchestrator | 2025-09-19 10:53:17.487268 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-19 10:54:57.991951 | orchestrator | changed: [testbed-manager] 2025-09-19 
10:54:57.992056 | orchestrator | 2025-09-19 10:54:57.992077 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-19 10:54:59.130968 | orchestrator | ok: [testbed-manager] 2025-09-19 10:54:59.131010 | orchestrator | 2025-09-19 10:54:59.131017 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 10:54:59.131024 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-19 10:54:59.131029 | orchestrator | 2025-09-19 10:54:59.513440 | orchestrator | ok: Runtime: 0:02:23.507427 2025-09-19 10:54:59.530325 | 2025-09-19 10:54:59.530455 | TASK [Reboot manager] 2025-09-19 10:55:01.066290 | orchestrator | ok: Runtime: 0:00:00.964939 2025-09-19 10:55:01.086076 | 2025-09-19 10:55:01.086254 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-19 10:55:15.415350 | orchestrator | ok 2025-09-19 10:55:15.426468 | 2025-09-19 10:55:15.427817 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-19 10:56:15.478366 | orchestrator | ok 2025-09-19 10:56:15.488766 | 2025-09-19 10:56:15.488893 | TASK [Deploy manager + bootstrap nodes] 2025-09-19 10:56:18.101946 | orchestrator | 2025-09-19 10:56:18.102140 | orchestrator | # DEPLOY MANAGER 2025-09-19 10:56:18.102153 | orchestrator | 2025-09-19 10:56:18.102159 | orchestrator | + set -e 2025-09-19 10:56:18.102165 | orchestrator | + echo 2025-09-19 10:56:18.102170 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-19 10:56:18.102178 | orchestrator | + echo 2025-09-19 10:56:18.102201 | orchestrator | + cat /opt/manager-vars.sh 2025-09-19 10:56:18.107800 | orchestrator | export NUMBER_OF_NODES=6 2025-09-19 10:56:18.107859 | orchestrator | 2025-09-19 10:56:18.107866 | orchestrator | export CEPH_VERSION=reef 2025-09-19 10:56:18.107872 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-19 10:56:18.107877 | orchestrator 
| export MANAGER_VERSION=9.2.0 2025-09-19 10:56:18.107890 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-19 10:56:18.107894 | orchestrator | 2025-09-19 10:56:18.107902 | orchestrator | export ARA=false 2025-09-19 10:56:18.107906 | orchestrator | export DEPLOY_MODE=manager 2025-09-19 10:56:18.107914 | orchestrator | export TEMPEST=false 2025-09-19 10:56:18.107918 | orchestrator | export IS_ZUUL=true 2025-09-19 10:56:18.107922 | orchestrator | 2025-09-19 10:56:18.107930 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-09-19 10:56:18.107935 | orchestrator | export EXTERNAL_API=false 2025-09-19 10:56:18.107939 | orchestrator | 2025-09-19 10:56:18.107943 | orchestrator | export IMAGE_USER=ubuntu 2025-09-19 10:56:18.107950 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-19 10:56:18.107954 | orchestrator | 2025-09-19 10:56:18.107958 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-19 10:56:18.107967 | orchestrator | 2025-09-19 10:56:18.107972 | orchestrator | + echo 2025-09-19 10:56:18.107979 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 10:56:18.109120 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 10:56:18.109146 | orchestrator | ++ INTERACTIVE=false 2025-09-19 10:56:18.109152 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 10:56:18.109158 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 10:56:18.109499 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 10:56:18.109533 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 10:56:18.109539 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 10:56:18.109543 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 10:56:18.109547 | orchestrator | ++ CEPH_VERSION=reef 2025-09-19 10:56:18.109552 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 10:56:18.109556 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 10:56:18.109585 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-19 10:56:18.109604 | 
orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-19 10:56:18.109609 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 10:56:18.109620 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-19 10:56:18.109647 | orchestrator | ++ export ARA=false
2025-09-19 10:56:18.109652 | orchestrator | ++ ARA=false
2025-09-19 10:56:18.109981 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-19 10:56:18.109988 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-19 10:56:18.109992 | orchestrator | ++ export TEMPEST=false
2025-09-19 10:56:18.109996 | orchestrator | ++ TEMPEST=false
2025-09-19 10:56:18.110000 | orchestrator | ++ export IS_ZUUL=true
2025-09-19 10:56:18.110003 | orchestrator | ++ IS_ZUUL=true
2025-09-19 10:56:18.110007 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246
2025-09-19 10:56:18.110034 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246
2025-09-19 10:56:18.110039 | orchestrator | ++ export EXTERNAL_API=false
2025-09-19 10:56:18.110043 | orchestrator | ++ EXTERNAL_API=false
2025-09-19 10:56:18.110047 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-19 10:56:18.110051 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-19 10:56:18.110058 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-19 10:56:18.110062 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-19 10:56:18.110066 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-19 10:56:18.110070 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-19 10:56:18.110075 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-09-19 10:56:18.180248 | orchestrator | + docker version
2025-09-19 10:56:18.424057 | orchestrator | Client: Docker Engine - Community
2025-09-19 10:56:18.424132 | orchestrator | Version: 27.5.1
2025-09-19 10:56:18.424143 | orchestrator | API version: 1.47
2025-09-19 10:56:18.424151 | orchestrator | Go version: go1.22.11
2025-09-19 10:56:18.424157 | orchestrator | Git commit: 9f9e405
2025-09-19 10:56:18.424164 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-19 10:56:18.424171 | orchestrator | OS/Arch: linux/amd64
2025-09-19 10:56:18.424178 | orchestrator | Context: default
2025-09-19 10:56:18.424184 | orchestrator |
2025-09-19 10:56:18.424191 | orchestrator | Server: Docker Engine - Community
2025-09-19 10:56:18.424197 | orchestrator | Engine:
2025-09-19 10:56:18.424204 | orchestrator | Version: 27.5.1
2025-09-19 10:56:18.424210 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-09-19 10:56:18.424238 | orchestrator | Go version: go1.22.11
2025-09-19 10:56:18.424245 | orchestrator | Git commit: 4c9b3b0
2025-09-19 10:56:18.424251 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-19 10:56:18.424257 | orchestrator | OS/Arch: linux/amd64
2025-09-19 10:56:18.424263 | orchestrator | Experimental: false
2025-09-19 10:56:18.424270 | orchestrator | containerd:
2025-09-19 10:56:18.424276 | orchestrator | Version: 1.7.27
2025-09-19 10:56:18.424282 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-09-19 10:56:18.424289 | orchestrator | runc:
2025-09-19 10:56:18.424303 | orchestrator | Version: 1.2.5
2025-09-19 10:56:18.424310 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-09-19 10:56:18.424316 | orchestrator | docker-init:
2025-09-19 10:56:18.424549 | orchestrator | Version: 0.19.0
2025-09-19 10:56:18.424562 | orchestrator | GitCommit: de40ad0
2025-09-19 10:56:18.429440 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-09-19 10:56:18.438079 | orchestrator | + set -e
2025-09-19 10:56:18.438526 | orchestrator | + source /opt/manager-vars.sh
2025-09-19 10:56:18.438549 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-19 10:56:18.438561 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-19 10:56:18.438572 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-19 10:56:18.438583 | orchestrator | ++ CEPH_VERSION=reef
2025-09-19 10:56:18.438646 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-19 10:56:18.438659 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-19 10:56:18.438670 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-19 10:56:18.438681 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-19 10:56:18.438692 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 10:56:18.438702 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-19 10:56:18.438713 | orchestrator | ++ export ARA=false
2025-09-19 10:56:18.438725 | orchestrator | ++ ARA=false
2025-09-19 10:56:18.438766 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-19 10:56:18.438778 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-19 10:56:18.438789 | orchestrator | ++ export TEMPEST=false
2025-09-19 10:56:18.438799 | orchestrator | ++ TEMPEST=false
2025-09-19 10:56:18.438810 | orchestrator | ++ export IS_ZUUL=true
2025-09-19 10:56:18.438820 | orchestrator | ++ IS_ZUUL=true
2025-09-19 10:56:18.438831 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246
2025-09-19 10:56:18.438842 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246
2025-09-19 10:56:18.438853 | orchestrator | ++ export EXTERNAL_API=false
2025-09-19 10:56:18.438863 | orchestrator | ++ EXTERNAL_API=false
2025-09-19 10:56:18.438874 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-19 10:56:18.438884 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-19 10:56:18.438895 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-19 10:56:18.438906 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-19 10:56:18.438917 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-19 10:56:18.438927 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-19 10:56:18.438938 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 10:56:18.438949 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 10:56:18.438959 | orchestrator | ++ INTERACTIVE=false
2025-09-19 10:56:18.438970 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 10:56:18.438986 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 10:56:18.439003 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-09-19 10:56:18.439014 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.2.0
2025-09-19 10:56:18.446145 | orchestrator | + set -e
2025-09-19 10:56:18.446181 | orchestrator | + VERSION=9.2.0
2025-09-19 10:56:18.446194 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.2.0/g' /opt/configuration/environments/manager/configuration.yml
2025-09-19 10:56:18.453599 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-09-19 10:56:18.453628 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-09-19 10:56:18.457263 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-09-19 10:56:18.462495 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-09-19 10:56:18.471967 | orchestrator | /opt/configuration ~
2025-09-19 10:56:18.472009 | orchestrator | + set -e
2025-09-19 10:56:18.472023 | orchestrator | + pushd /opt/configuration
2025-09-19 10:56:18.472035 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-19 10:56:18.473620 | orchestrator | + source /opt/venv/bin/activate
2025-09-19 10:56:18.474919 | orchestrator | ++ deactivate nondestructive
2025-09-19 10:56:18.475002 | orchestrator | ++ '[' -n '' ']'
2025-09-19 10:56:18.475018 | orchestrator | ++ '[' -n '' ']'
2025-09-19 10:56:18.475061 | orchestrator | ++ hash -r
2025-09-19 10:56:18.475084 | orchestrator | ++ '[' -n '' ']'
2025-09-19 10:56:18.475095 | orchestrator | ++ unset VIRTUAL_ENV
2025-09-19 10:56:18.475106 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-09-19 10:56:18.475117 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-09-19 10:56:18.475140 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-09-19 10:56:18.475151 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-09-19 10:56:18.475162 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-09-19 10:56:18.475173 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-09-19 10:56:18.475188 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 10:56:18.475348 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 10:56:18.475390 | orchestrator | ++ export PATH
2025-09-19 10:56:18.475402 | orchestrator | ++ '[' -n '' ']'
2025-09-19 10:56:18.475590 | orchestrator | ++ '[' -z '' ']'
2025-09-19 10:56:18.475606 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-09-19 10:56:18.475617 | orchestrator | ++ PS1='(venv) '
2025-09-19 10:56:18.475629 | orchestrator | ++ export PS1
2025-09-19 10:56:18.475643 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-09-19 10:56:18.475662 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-09-19 10:56:18.475672 | orchestrator | ++ hash -r
2025-09-19 10:56:18.475769 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-09-19 10:56:19.560098 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-09-19 10:56:19.561027 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2025-09-19 10:56:19.562383 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-09-19 10:56:19.563839 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-09-19 10:56:19.564974 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-09-19 10:56:19.575136 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.0)
2025-09-19 10:56:19.576508 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-09-19 10:56:19.577762 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2025-09-19 10:56:19.579269 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-09-19 10:56:19.610973 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.3)
2025-09-19 10:56:19.612460 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-09-19 10:56:19.614426 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0)
2025-09-19 10:56:19.615555 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.8.3)
2025-09-19 10:56:19.619952 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-09-19 10:56:19.821213 | orchestrator | ++ which gilt
2025-09-19 10:56:19.824318 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-09-19 10:56:19.824375 | orchestrator | + /opt/venv/bin/gilt overlay
2025-09-19 10:56:20.058874 | orchestrator | osism.cfg-generics:
2025-09-19 10:56:20.220166 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-09-19 10:56:20.220268 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-09-19 10:56:20.220370 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-09-19 10:56:20.220475 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-09-19 10:56:20.819473 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-09-19 10:56:20.826178 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-09-19 10:56:21.158624 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-09-19 10:56:21.212202 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-19 10:56:21.212304 | orchestrator | + deactivate
2025-09-19 10:56:21.212318 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-19 10:56:21.212332 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 10:56:21.212343 | orchestrator | + export PATH
2025-09-19 10:56:21.212354 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-19 10:56:21.212366 | orchestrator | + '[' -n '' ']'
2025-09-19 10:56:21.212380 | orchestrator | + hash -r
2025-09-19 10:56:21.212409 | orchestrator | + '[' -n '' ']'
2025-09-19 10:56:21.212420 | orchestrator | + unset VIRTUAL_ENV
2025-09-19 10:56:21.212431 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-19 10:56:21.212442 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-19 10:56:21.212610 | orchestrator | ~
2025-09-19 10:56:21.212627 | orchestrator | + unset -f deactivate
2025-09-19 10:56:21.212639 | orchestrator | + popd
2025-09-19 10:56:21.215018 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]]
2025-09-19 10:56:21.215140 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-09-19 10:56:21.216261 | orchestrator | ++ semver 9.2.0 7.0.0
2025-09-19 10:56:21.283455 | orchestrator | + [[ 1 -ge 0 ]]
2025-09-19 10:56:21.283542 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-09-19 10:56:21.283556 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-09-19 10:56:21.385880 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-19 10:56:21.386002 | orchestrator | + source /opt/venv/bin/activate
2025-09-19 10:56:21.386089 | orchestrator | ++ deactivate nondestructive
2025-09-19 10:56:21.386113 | orchestrator | ++ '[' -n '' ']'
2025-09-19 10:56:21.386143 | orchestrator | ++ '[' -n '' ']'
2025-09-19 10:56:21.386163 | orchestrator | ++ hash -r
2025-09-19 10:56:21.386182 | orchestrator | ++ '[' -n '' ']'
2025-09-19 10:56:21.386200 | orchestrator | ++ unset VIRTUAL_ENV
2025-09-19 10:56:21.386232 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-09-19 10:56:21.386252 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-09-19 10:56:21.386289 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-09-19 10:56:21.386307 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-09-19 10:56:21.386318 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-09-19 10:56:21.386329 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-09-19 10:56:21.386341 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 10:56:21.386353 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 10:56:21.386385 | orchestrator | ++ export PATH
2025-09-19 10:56:21.386396 | orchestrator | ++ '[' -n '' ']'
2025-09-19 10:56:21.386407 | orchestrator | ++ '[' -z '' ']'
2025-09-19 10:56:21.386418 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-09-19 10:56:21.386428 | orchestrator | ++ PS1='(venv) '
2025-09-19 10:56:21.386439 | orchestrator | ++ export PS1
2025-09-19 10:56:21.386450 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-09-19 10:56:21.386460 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-09-19 10:56:21.386471 | orchestrator | ++ hash -r
2025-09-19 10:56:21.386482 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-09-19 10:56:22.472676 | orchestrator |
2025-09-19 10:56:22.472802 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-09-19 10:56:22.472818 | orchestrator |
2025-09-19 10:56:22.472830 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-19 10:56:23.053509 | orchestrator | ok: [testbed-manager]
2025-09-19 10:56:23.053611 | orchestrator |
2025-09-19 10:56:23.053628 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-19 10:56:24.059951 | orchestrator | changed: [testbed-manager]
2025-09-19 10:56:24.060055 | orchestrator |
2025-09-19 10:56:24.060072 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-09-19 10:56:24.060084 | orchestrator |
2025-09-19 10:56:24.060095 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 10:56:27.303464 | orchestrator | ok: [testbed-manager]
2025-09-19 10:56:27.303575 | orchestrator |
2025-09-19 10:56:27.303591 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-09-19 10:56:27.361229 | orchestrator | ok: [testbed-manager]
2025-09-19 10:56:27.361318 | orchestrator |
2025-09-19 10:56:27.361334 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-09-19 10:56:27.846225 | orchestrator | changed: [testbed-manager]
2025-09-19 10:56:27.846322 | orchestrator |
2025-09-19 10:56:27.846339 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-09-19 10:56:27.888519 | orchestrator | skipping: [testbed-manager]
2025-09-19 10:56:27.888593 | orchestrator |
2025-09-19 10:56:27.888605 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-09-19 10:56:28.218162 | orchestrator | changed: [testbed-manager]
2025-09-19 10:56:28.218262 | orchestrator |
2025-09-19 10:56:28.218277 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-09-19 10:56:28.279072 | orchestrator | skipping: [testbed-manager]
2025-09-19 10:56:28.279164 | orchestrator |
2025-09-19 10:56:28.279179 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-09-19 10:56:28.631692 | orchestrator | ok: [testbed-manager]
2025-09-19 10:56:28.631838 | orchestrator |
2025-09-19 10:56:28.631860 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-09-19 10:56:28.760199 | orchestrator | skipping: [testbed-manager]
2025-09-19 10:56:28.760274 | orchestrator |
2025-09-19 10:56:28.760282 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-09-19 10:56:28.760290 | orchestrator |
2025-09-19 10:56:28.760296 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 10:56:30.519889 | orchestrator | ok: [testbed-manager]
2025-09-19 10:56:30.519993 | orchestrator |
2025-09-19 10:56:30.520010 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-09-19 10:56:30.620257 | orchestrator | included: osism.services.traefik for testbed-manager
2025-09-19 10:56:30.620342 | orchestrator |
2025-09-19 10:56:30.620356 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-09-19 10:56:30.677775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-09-19 10:56:30.677874 | orchestrator |
2025-09-19 10:56:30.677898 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-09-19 10:56:31.800252 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-09-19 10:56:31.800347 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-09-19 10:56:31.800365 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-09-19 10:56:31.800377 | orchestrator |
2025-09-19 10:56:31.800389 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-09-19 10:56:33.657861 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-09-19 10:56:33.657973 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-09-19 10:56:33.657994 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-09-19 10:56:33.658011 | orchestrator |
2025-09-19 10:56:33.658100 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-09-19 10:56:34.319593 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 10:56:34.319689 | orchestrator | changed: [testbed-manager]
2025-09-19 10:56:34.319707 | orchestrator |
2025-09-19 10:56:34.319758 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-09-19 10:56:35.003238 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 10:56:35.003362 | orchestrator | changed: [testbed-manager]
2025-09-19 10:56:35.003391 | orchestrator |
2025-09-19 10:56:35.003411 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-09-19 10:56:35.057933 | orchestrator | skipping: [testbed-manager]
2025-09-19 10:56:35.058141 | orchestrator |
2025-09-19 10:56:35.058168 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-09-19 10:56:35.424750 | orchestrator | ok: [testbed-manager]
2025-09-19 10:56:35.424855 | orchestrator |
2025-09-19 10:56:35.424872 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-09-19 10:56:35.495098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-09-19 10:56:35.495169 | orchestrator |
2025-09-19 10:56:35.495177 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-09-19 10:56:36.564645 | orchestrator | changed: [testbed-manager]
2025-09-19 10:56:36.564759 | orchestrator |
2025-09-19 10:56:36.564774 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-09-19 10:56:37.366001 | orchestrator | changed: [testbed-manager]
2025-09-19 10:56:37.366161 | orchestrator |
2025-09-19 10:56:37.366188 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-09-19 10:56:49.093127 | orchestrator | changed: [testbed-manager]
2025-09-19 10:56:49.093223 | orchestrator |
2025-09-19 10:56:49.093257 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-09-19 10:56:49.152617 | orchestrator | skipping: [testbed-manager]
2025-09-19 10:56:49.152779 | orchestrator |
2025-09-19 10:56:49.152797 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-09-19 10:56:49.152809 | orchestrator |
2025-09-19 10:56:49.152819 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 10:56:50.966467 | orchestrator | ok: [testbed-manager]
2025-09-19 10:56:50.966575 | orchestrator |
2025-09-19 10:56:50.966592 | orchestrator | TASK [Apply manager role] ******************************************************
2025-09-19 10:56:51.086276 | orchestrator | included: osism.services.manager for testbed-manager
2025-09-19 10:56:51.086397 | orchestrator |
2025-09-19 10:56:51.086424 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-09-19 10:56:51.154729 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 10:56:51.154802 | orchestrator |
2025-09-19 10:56:51.154813 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-09-19 10:56:53.935893 | orchestrator | ok: [testbed-manager]
2025-09-19 10:56:53.935991 | orchestrator |
2025-09-19 10:56:53.936007 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-09-19 10:56:53.985270 | orchestrator | ok: [testbed-manager]
2025-09-19 10:56:53.985346 | orchestrator |
2025-09-19 10:56:53.985356 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-09-19 10:56:54.133811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-09-19 10:56:54.133891 | orchestrator |
2025-09-19 10:56:54.133904 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-09-19 10:56:57.021309 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-09-19 10:56:57.021413 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-09-19 10:56:57.021428 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-09-19 10:56:57.021440 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-09-19 10:56:57.021452 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-09-19 10:56:57.021463 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-09-19 10:56:57.021475 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-09-19 10:56:57.021486 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-09-19 10:56:57.021498 | orchestrator |
2025-09-19 10:56:57.021514 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-09-19 10:56:57.666511 | orchestrator | changed: [testbed-manager]
2025-09-19 10:56:57.666618 | orchestrator |
2025-09-19 10:56:57.666635 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-09-19 10:56:58.305181 | orchestrator | changed: [testbed-manager]
2025-09-19 10:56:58.305296 | orchestrator |
2025-09-19 10:56:58.305317 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-09-19 10:56:58.387179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-09-19 10:56:58.387265 | orchestrator |
2025-09-19 10:56:58.387279 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-09-19 10:56:59.611334 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-09-19 10:56:59.611415 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-09-19 10:56:59.611428 | orchestrator |
2025-09-19 10:56:59.611438 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-09-19 10:57:00.249930 | orchestrator | changed: [testbed-manager]
2025-09-19 10:57:00.250078 | orchestrator |
2025-09-19 10:57:00.250096 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-09-19 10:57:00.298159 | orchestrator | skipping: [testbed-manager]
2025-09-19 10:57:00.298249 | orchestrator |
2025-09-19 10:57:00.298265 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-09-19 10:57:00.377063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-09-19 10:57:00.377139 | orchestrator |
2025-09-19 10:57:00.377149 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-09-19 10:57:00.995986 | orchestrator | changed: [testbed-manager]
2025-09-19 10:57:00.996096 | orchestrator |
2025-09-19 10:57:00.996112 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-09-19 10:57:01.066521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-09-19 10:57:01.066604 | orchestrator |
2025-09-19 10:57:01.066620 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-09-19 10:57:02.424551 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 10:57:02.424777 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 10:57:02.424796 | orchestrator | changed: [testbed-manager]
2025-09-19 10:57:02.424810 | orchestrator |
2025-09-19 10:57:02.424822 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-09-19 10:57:03.049365 | orchestrator | changed: [testbed-manager]
2025-09-19 10:57:03.049475 | orchestrator |
2025-09-19 10:57:03.050204 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-09-19 10:57:03.096370 | orchestrator | skipping: [testbed-manager]
2025-09-19 10:57:03.096472 | orchestrator |
2025-09-19 10:57:03.096500 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-09-19 10:57:03.180914 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-09-19 10:57:03.181004 | orchestrator |
2025-09-19 10:57:03.181018 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-09-19 10:57:03.719570 | orchestrator | changed: [testbed-manager]
2025-09-19 10:57:03.719661 | orchestrator |
2025-09-19 10:57:03.719702 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-09-19 10:57:04.138854 | orchestrator | changed: [testbed-manager]
2025-09-19 10:57:04.138950 | orchestrator |
2025-09-19 10:57:04.138965 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-09-19 10:57:05.397223 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-09-19 10:57:05.397299 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-09-19 10:57:05.397308 | orchestrator |
2025-09-19 10:57:05.397317 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-09-19 10:57:06.031332 | orchestrator | changed: [testbed-manager]
2025-09-19 10:57:06.031432 | orchestrator |
2025-09-19 10:57:06.031448 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-09-19 10:57:06.443687 | orchestrator | ok: [testbed-manager]
2025-09-19 10:57:06.443780 | orchestrator |
2025-09-19 10:57:06.443796 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-09-19 10:57:06.797348 | orchestrator | changed: [testbed-manager]
2025-09-19 10:57:06.797436 | orchestrator |
2025-09-19 10:57:06.797450 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-09-19 10:57:06.846443 | orchestrator | skipping: [testbed-manager]
2025-09-19 10:57:06.846520 | orchestrator |
2025-09-19 10:57:06.846533 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-09-19 10:57:06.919109 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-09-19 10:57:06.919242 | orchestrator |
2025-09-19 10:57:06.919268 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-09-19 10:57:06.967562 | orchestrator | ok: [testbed-manager]
2025-09-19 10:57:06.967707 | orchestrator |
2025-09-19 10:57:06.967733 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-09-19 10:57:09.009477 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-09-19 10:57:09.009596 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-09-19 10:57:09.009613 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-09-19 10:57:09.009625 | orchestrator |
2025-09-19 10:57:09.009638 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-09-19 10:57:09.735255 | orchestrator | changed: [testbed-manager]
2025-09-19 10:57:09.735352 | orchestrator |
2025-09-19 10:57:09.735369 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-09-19 10:57:10.451959 | orchestrator | changed: [testbed-manager]
2025-09-19 10:57:10.452057 | orchestrator |
2025-09-19 10:57:10.452072 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-09-19 10:57:11.145886 | orchestrator | changed: [testbed-manager]
2025-09-19 10:57:11.145972 | orchestrator |
2025-09-19 10:57:11.145983 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-09-19 10:57:11.228348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-09-19 10:57:11.228456 | orchestrator |
2025-09-19 10:57:11.228473 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-09-19 10:57:11.278950 | orchestrator | ok: [testbed-manager]
2025-09-19 10:57:11.279028 | orchestrator |
2025-09-19 10:57:11.279043 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-09-19 10:57:11.995353 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-09-19 10:57:11.995442 | orchestrator |
2025-09-19 10:57:11.995455 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-09-19 10:57:12.085651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-09-19 10:57:12.085782 | orchestrator |
2025-09-19 10:57:12.085797 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-09-19 10:57:12.778091 | orchestrator | changed: [testbed-manager]
2025-09-19 10:57:12.778174 | orchestrator |
2025-09-19 10:57:12.778186 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-09-19 10:57:13.359281 | orchestrator | ok: [testbed-manager]
2025-09-19 10:57:13.359403 | orchestrator |
2025-09-19 10:57:13.359419 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-09-19 10:57:13.414092 | orchestrator | skipping: [testbed-manager]
2025-09-19 10:57:13.414182 | orchestrator |
2025-09-19 10:57:13.414199 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-09-19 10:57:13.467931 | orchestrator | ok: [testbed-manager]
2025-09-19 10:57:13.468014 | orchestrator |
2025-09-19 10:57:13.468024 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-09-19 10:57:14.263149 | orchestrator | changed: [testbed-manager]
2025-09-19 10:57:14.263932 | orchestrator |
2025-09-19 10:57:14.263962 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-09-19 10:58:18.273490 | orchestrator | changed: [testbed-manager]
2025-09-19 10:58:18.273647 | orchestrator |
2025-09-19 10:58:18.273676 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-09-19 10:58:19.268686 | orchestrator | ok: [testbed-manager]
2025-09-19 10:58:19.268776 | orchestrator |
2025-09-19 10:58:19.268791 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-09-19 10:58:19.323068 | orchestrator | skipping: [testbed-manager]
2025-09-19 10:58:19.323156 | orchestrator |
2025-09-19 10:58:19.323176 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-09-19 10:58:24.180630 | orchestrator | changed: [testbed-manager]
2025-09-19 10:58:24.180736 | orchestrator |
2025-09-19 10:58:24.180753 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-09-19 10:58:24.276380 | orchestrator | ok: [testbed-manager]
2025-09-19 10:58:24.276504 | orchestrator |
2025-09-19 10:58:24.276529 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-19 10:58:24.276542 | orchestrator |
2025-09-19 10:58:24.276627 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-09-19 10:58:24.339191 | orchestrator | skipping: [testbed-manager]
2025-09-19 10:58:24.339298 | orchestrator |
2025-09-19 10:58:24.339319 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-09-19 10:59:24.399371 | orchestrator | Pausing for 60 seconds
2025-09-19 10:59:24.399477 | orchestrator | changed: [testbed-manager]
2025-09-19 10:59:24.399532 | orchestrator |
2025-09-19 10:59:24.399545 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-09-19 10:59:28.496001 | orchestrator | changed: [testbed-manager]
2025-09-19 10:59:28.496105 | orchestrator |
2025-09-19 10:59:28.496121 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-09-19 11:00:10.187332 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-09-19 11:00:10.187391 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
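The "Wait for an healthy manager service" handler above retries a health probe until Docker reports the container healthy. The same pattern appears further down in this log as the shell function `wait_for_container_healthy` (its body is visible in the `set -x` trace). A minimal sketch of that loop; the 5-second retry interval and the failure message are assumptions, since only the variable names and the `docker inspect` probe appear in the trace:

```shell
#!/bin/sh
# Sketch of the wait_for_container_healthy loop traced later in this log.
# Polls the container's health status until Docker reports "healthy",
# giving up after max_attempts tries.
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5  # assumed interval; not visible in the trace
    done
}
```

The trace below shows this function returning immediately for ceph-ansible, kolla-ansible, and osism-ansible, since all three containers were already healthy.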
2025-09-19 11:00:10.187397 | orchestrator | changed: [testbed-manager]
2025-09-19 11:00:10.187402 | orchestrator |
2025-09-19 11:00:10.187407 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-09-19 11:00:19.512298 | orchestrator | changed: [testbed-manager]
2025-09-19 11:00:19.512393 | orchestrator |
2025-09-19 11:00:19.512416 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-09-19 11:00:19.608836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-09-19 11:00:19.608928 | orchestrator |
2025-09-19 11:00:19.608945 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-19 11:00:19.608958 | orchestrator |
2025-09-19 11:00:19.608969 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-09-19 11:00:19.659828 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:00:19.659917 | orchestrator |
2025-09-19 11:00:19.659932 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:00:19.659945 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-09-19 11:00:19.659960 | orchestrator |
2025-09-19 11:00:19.742212 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-19 11:00:19.742307 | orchestrator | + deactivate
2025-09-19 11:00:19.742323 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-19 11:00:19.742337 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 11:00:19.742348 | orchestrator | + export PATH
2025-09-19 11:00:19.742360 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-19 11:00:19.742372 | orchestrator | + '[' -n '' ']'
2025-09-19 11:00:19.742383 | orchestrator | + hash -r
2025-09-19 11:00:19.742394 | orchestrator | + '[' -n '' ']'
2025-09-19 11:00:19.742405 | orchestrator | + unset VIRTUAL_ENV
2025-09-19 11:00:19.742416 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-19 11:00:19.742465 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-19 11:00:19.742485 | orchestrator | + unset -f deactivate
2025-09-19 11:00:19.742503 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-19 11:00:19.751191 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-19 11:00:19.751253 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-19 11:00:19.751268 | orchestrator | + local max_attempts=60
2025-09-19 11:00:19.751281 | orchestrator | + local name=ceph-ansible
2025-09-19 11:00:19.751293 | orchestrator | + local attempt_num=1
2025-09-19 11:00:19.752562 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:00:19.783530 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:00:19.783600 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-19 11:00:19.783614 | orchestrator | + local max_attempts=60
2025-09-19 11:00:19.783627 | orchestrator | + local name=kolla-ansible
2025-09-19 11:00:19.783668 | orchestrator | + local attempt_num=1
2025-09-19 11:00:19.783680 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-19 11:00:19.820556 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:00:19.820639 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-19 11:00:19.820699 | orchestrator | + local max_attempts=60
2025-09-19 11:00:19.820714 | orchestrator | + local name=osism-ansible
2025-09-19 11:00:19.820725 | orchestrator | + local attempt_num=1
2025-09-19 11:00:19.820808 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-19 11:00:19.858643 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:00:19.858723 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-19 11:00:19.858741 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-19 11:00:20.534848 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-19 11:00:20.733550 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-09-19 11:00:20.733647 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-09-19 11:00:20.733662 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-09-19 11:00:20.733674 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-09-19 11:00:20.733686 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-09-19 11:00:20.733697 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-09-19 11:00:20.733708 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-09-19 11:00:20.733719 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2025-09-19 11:00:20.733730 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-09-19 11:00:20.733741 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-09-19 11:00:20.733751 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-09-19 11:00:20.733762 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-09-19 11:00:20.733773 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-09-19 11:00:20.733784 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2025-09-19 11:00:20.733795 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-09-19 11:00:20.733834 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-09-19 11:00:20.739979 | orchestrator | ++ semver 9.2.0 7.0.0
2025-09-19 11:00:20.772021 | orchestrator | + [[ 1 -ge 0 ]]
2025-09-19 11:00:20.772091 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-09-19 11:00:20.773186 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-09-19 11:00:32.817103 | orchestrator | 2025-09-19 11:00:32 | INFO  | Task 2cbda399-cb2f-4079-803a-e1abe595edb7 (resolvconf) was prepared for execution.
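The trace above calls a `semver` helper (`semver 9.2.0 7.0.0` returning `1`) to gate on the installed Docker Compose version. A rough, purely illustrative stand-in using GNU `sort -V`; the actual `semver` helper used by the testbed scripts is not shown in this log:

```shell
#!/bin/sh
# Illustrative semver comparison: prints -1, 0, or 1 depending on whether
# the first version is lower than, equal to, or higher than the second.
# Relies on GNU sort's version ordering (-V); not the testbed's real helper.
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1
    else
        echo 1
    fi
}
```

With this, `[ "$(semver 9.2.0 7.0.0)" -ge 0 ]` succeeds, matching the `[[ 1 -ge 0 ]]` check in the trace.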
2025-09-19 11:00:32.817212 | orchestrator | 2025-09-19 11:00:32 | INFO  | It takes a moment until task 2cbda399-cb2f-4079-803a-e1abe595edb7 (resolvconf) has been started and output is visible here.
2025-09-19 11:00:46.465264 | orchestrator |
2025-09-19 11:00:46.465343 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-09-19 11:00:46.465350 | orchestrator |
2025-09-19 11:00:46.465354 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 11:00:46.465359 | orchestrator | Friday 19 September 2025 11:00:36 +0000 (0:00:00.151) 0:00:00.151 ******
2025-09-19 11:00:46.465363 | orchestrator | ok: [testbed-manager]
2025-09-19 11:00:46.465368 | orchestrator |
2025-09-19 11:00:46.465373 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-09-19 11:00:46.465377 | orchestrator | Friday 19 September 2025 11:00:40 +0000 (0:00:03.781) 0:00:03.933 ******
2025-09-19 11:00:46.465381 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:00:46.465386 | orchestrator |
2025-09-19 11:00:46.465389 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-09-19 11:00:46.465426 | orchestrator | Friday 19 September 2025 11:00:40 +0000 (0:00:00.067) 0:00:04.000 ******
2025-09-19 11:00:46.465430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-09-19 11:00:46.465435 | orchestrator |
2025-09-19 11:00:46.465439 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-09-19 11:00:46.465442 | orchestrator | Friday 19 September 2025 11:00:40 +0000 (0:00:00.069) 0:00:04.070 ******
2025-09-19 11:00:46.465446 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 11:00:46.465450 | orchestrator |
2025-09-19 11:00:46.465454 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-09-19 11:00:46.465458 | orchestrator | Friday 19 September 2025 11:00:40 +0000 (0:00:00.085) 0:00:04.156 ******
2025-09-19 11:00:46.465462 | orchestrator | ok: [testbed-manager]
2025-09-19 11:00:46.465466 | orchestrator |
2025-09-19 11:00:46.465470 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-09-19 11:00:46.465474 | orchestrator | Friday 19 September 2025 11:00:41 +0000 (0:00:01.129) 0:00:05.285 ******
2025-09-19 11:00:46.465477 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:00:46.465481 | orchestrator |
2025-09-19 11:00:46.465485 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-09-19 11:00:46.465489 | orchestrator | Friday 19 September 2025 11:00:41 +0000 (0:00:00.058) 0:00:05.344 ******
2025-09-19 11:00:46.465493 | orchestrator | ok: [testbed-manager]
2025-09-19 11:00:46.465497 | orchestrator |
2025-09-19 11:00:46.465501 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-09-19 11:00:46.465504 | orchestrator | Friday 19 September 2025 11:00:42 +0000 (0:00:00.475) 0:00:05.820 ******
2025-09-19 11:00:46.465508 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:00:46.465512 | orchestrator |
2025-09-19 11:00:46.465516 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-09-19 11:00:46.465534 | orchestrator | Friday 19 September 2025 11:00:42 +0000 (0:00:00.095) 0:00:05.916 ******
2025-09-19 11:00:46.465538 | orchestrator | changed: [testbed-manager]
2025-09-19 11:00:46.465542 | orchestrator |
2025-09-19 11:00:46.465545 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-19 11:00:46.465549 | orchestrator | Friday 19 September 2025 11:00:43 +0000 (0:00:00.520) 0:00:06.437 ******
2025-09-19 11:00:46.465553 | orchestrator | changed: [testbed-manager]
2025-09-19 11:00:46.465557 | orchestrator |
2025-09-19 11:00:46.465560 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-19 11:00:46.465564 | orchestrator | Friday 19 September 2025 11:00:44 +0000 (0:00:01.025) 0:00:07.463 ******
2025-09-19 11:00:46.465568 | orchestrator | ok: [testbed-manager]
2025-09-19 11:00:46.465572 | orchestrator |
2025-09-19 11:00:46.465576 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-19 11:00:46.465586 | orchestrator | Friday 19 September 2025 11:00:45 +0000 (0:00:00.994) 0:00:08.457 ******
2025-09-19 11:00:46.465590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-09-19 11:00:46.465594 | orchestrator |
2025-09-19 11:00:46.465598 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-19 11:00:46.465601 | orchestrator | Friday 19 September 2025 11:00:45 +0000 (0:00:00.092) 0:00:08.550 ******
2025-09-19 11:00:46.465605 | orchestrator | changed: [testbed-manager]
2025-09-19 11:00:46.465609 | orchestrator |
2025-09-19 11:00:46.465613 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:00:46.465617 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 11:00:46.465621 | orchestrator |
2025-09-19 11:00:46.465625 | orchestrator |
2025-09-19 11:00:46.465629 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:00:46.465633 | orchestrator | Friday 19 September 2025 11:00:46 +0000 (0:00:01.114) 0:00:09.664 ******
2025-09-19 11:00:46.465636 | orchestrator | ===============================================================================
2025-09-19 11:00:46.465640 | orchestrator | Gathering Facts --------------------------------------------------------- 3.78s
2025-09-19 11:00:46.465644 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s
2025-09-19 11:00:46.465648 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.11s
2025-09-19 11:00:46.465651 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.03s
2025-09-19 11:00:46.465655 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s
2025-09-19 11:00:46.465659 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s
2025-09-19 11:00:46.465672 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s
2025-09-19 11:00:46.465676 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.10s
2025-09-19 11:00:46.465680 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2025-09-19 11:00:46.465684 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-09-19 11:00:46.465688 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s
2025-09-19 11:00:46.465691 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-09-19 11:00:46.465695 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-09-19 11:00:46.848819 | orchestrator | + osism apply sshconfig
2025-09-19 11:00:58.805334 | orchestrator | 2025-09-19 11:00:58 | INFO  | Task fa420178-ea8e-4370-a6f5-220781b009a4 (sshconfig) was prepared for execution.
2025-09-19 11:00:58.805499 | orchestrator | 2025-09-19 11:00:58 | INFO  | It takes a moment until task fa420178-ea8e-4370-a6f5-220781b009a4 (sshconfig) has been started and output is visible here.
2025-09-19 11:01:10.300819 | orchestrator |
2025-09-19 11:01:10.300931 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-09-19 11:01:10.300947 | orchestrator |
2025-09-19 11:01:10.300958 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-09-19 11:01:10.300969 | orchestrator | Friday 19 September 2025 11:01:02 +0000 (0:00:00.161) 0:00:00.161 ******
2025-09-19 11:01:10.300979 | orchestrator | ok: [testbed-manager]
2025-09-19 11:01:10.300990 | orchestrator |
2025-09-19 11:01:10.301000 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-09-19 11:01:10.301011 | orchestrator | Friday 19 September 2025 11:01:03 +0000 (0:00:00.549) 0:00:00.710 ******
2025-09-19 11:01:10.301021 | orchestrator | changed: [testbed-manager]
2025-09-19 11:01:10.301031 | orchestrator |
2025-09-19 11:01:10.301041 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-09-19 11:01:10.301051 | orchestrator | Friday 19 September 2025 11:01:03 +0000 (0:00:00.503) 0:00:01.214 ******
2025-09-19 11:01:10.301061 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-09-19 11:01:10.301071 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-09-19 11:01:10.301081 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-09-19 11:01:10.301090 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-09-19 11:01:10.301100 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-09-19 11:01:10.301110 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-09-19 11:01:10.301120 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-09-19 11:01:10.301130 | orchestrator |
2025-09-19 11:01:10.301140 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-09-19 11:01:10.301149 | orchestrator | Friday 19 September 2025 11:01:09 +0000 (0:00:05.659) 0:00:06.873 ******
2025-09-19 11:01:10.301180 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:01:10.301191 | orchestrator |
2025-09-19 11:01:10.301200 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-09-19 11:01:10.301210 | orchestrator | Friday 19 September 2025 11:01:09 +0000 (0:00:00.075) 0:00:06.948 ******
2025-09-19 11:01:10.301220 | orchestrator | changed: [testbed-manager]
2025-09-19 11:01:10.301230 | orchestrator |
2025-09-19 11:01:10.301240 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:01:10.301250 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:01:10.301261 | orchestrator |
2025-09-19 11:01:10.301275 | orchestrator |
2025-09-19 11:01:10.301291 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:01:10.301308 | orchestrator | Friday 19 September 2025 11:01:10 +0000 (0:00:00.603) 0:00:07.552 ******
2025-09-19 11:01:10.301325 | orchestrator | ===============================================================================
2025-09-19 11:01:10.301341 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.66s
2025-09-19 11:01:10.301358 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s
2025-09-19 11:01:10.301406 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s
2025-09-19 11:01:10.301424 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.50s
2025-09-19 11:01:10.301442 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2025-09-19 11:01:10.552212 | orchestrator | + osism apply known-hosts
2025-09-19 11:01:22.403837 | orchestrator | 2025-09-19 11:01:22 | INFO  | Task 4079f3ef-e528-45ee-8067-936d0cbad517 (known-hosts) was prepared for execution.
2025-09-19 11:01:22.403950 | orchestrator | 2025-09-19 11:01:22 | INFO  | It takes a moment until task 4079f3ef-e528-45ee-8067-936d0cbad517 (known-hosts) has been started and output is visible here.
2025-09-19 11:01:39.112679 | orchestrator |
2025-09-19 11:01:39.112785 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-09-19 11:01:39.112801 | orchestrator |
2025-09-19 11:01:39.112814 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-09-19 11:01:39.112826 | orchestrator | Friday 19 September 2025 11:01:26 +0000 (0:00:00.168) 0:00:00.168 ******
2025-09-19 11:01:39.112838 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-19 11:01:39.112850 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-19 11:01:39.112861 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-19 11:01:39.112872 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-09-19 11:01:39.112883 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-19 11:01:39.112894 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-19 11:01:39.112905 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-19 11:01:39.112916 | orchestrator |
2025-09-19 11:01:39.112927 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-09-19 11:01:39.112939 | orchestrator | Friday 19 September 2025 11:01:32 +0000 (0:00:06.060) 0:00:06.229 ******
2025-09-19 11:01:39.112951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-09-19 11:01:39.112965 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-09-19 11:01:39.112976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-09-19 11:01:39.112987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-09-19 11:01:39.112997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-09-19 11:01:39.113008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-09-19 11:01:39.113019 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-09-19 11:01:39.113030 | orchestrator |
2025-09-19 11:01:39.113041 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 11:01:39.113052 | orchestrator | Friday 19 September 2025 11:01:32 +0000
(0:00:00.170) 0:00:06.399 ****** 2025-09-19 11:01:39.113073 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB0+savVLcsi+jPsxbJ1cHdb56Br5VQiYEr3boe7XN2L25+4sL6zRELw9J+NPCEjIppJHrJerQJX0Zz1lKU6r10=) 2025-09-19 11:01:39.113090 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChj7C3EJtYo42+yDrSSEokiwCrnRVydWSO+YkOLRjYJTt6UV0dUta1w6mE3uMwSBCeZ14JhIQLv/fIw+rfhPW72xHkZu1V0vhzLuA4yb3W4YeGkVPGFehDHeGYw7GN6bR4pjuTHe8XaoInooXbvslCBZS7WMweyIQ+MZvZrlemr8iDUowedmpmRg2oF8Nl3nLcgxkpplSs7b4K1GGpkfPnDRHvf2H2SXWgXZVz/9pOgmbvH+xL1728Rwss5o1pTkL45TJyWzSPOshsw7vxfH24aC0xCcJstU/icrDlJY0h7d6YB4NavXIJOSJY3fK4h7PXIVQ6pMMp+DMbxtFVfrdwaCR7Hqht+Dd7HXClGlxI5gLsJgSpsUKtfEF7IV2wcfCGWFiJdpFlrzyKhhJvy2o7eR19DST1QxUAgNfix9LSN2nOFkvoYsO+agWNPskiFPB9N4COVaX+ELxxbSynqoAF9RazJ8bNjiGxVtezGPABwRV0dKf/cItC2weupfhF0tk=) 2025-09-19 11:01:39.113105 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAreBwIcm1/I1C2j21JV0Kte2AOTRol6gmWyiect5bN1) 2025-09-19 11:01:39.113139 | orchestrator | 2025-09-19 11:01:39.113151 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 11:01:39.113162 | orchestrator | Friday 19 September 2025 11:01:33 +0000 (0:00:01.180) 0:00:07.580 ****** 2025-09-19 11:01:39.113173 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMJ/1uasdBLndcuq34KoUNZfvndkE63+0ZmYxaDfUxbe) 2025-09-19 11:01:39.113215 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCCzV+FHr3/gqiz0HGy/RdjKHYjLevBQh6Uw6z5SJfUuKJ/umq3juq01uOw3vDKBnXLw+O3CaePYbXn/LlAulhACV9A3dEA1INuC3WwW480hbHRp9cz5YFpAss2By9F5jz4xjaJcHCK27gKJPPJ863VABz7WF7aUnJSk0mUk048+50HdyZFDPLolrACELWB2xWPJ6PAgIiWY6yME0v52ZNhxrATw1goY1FR3nEIZB763dGk/uXVi+mWL92K3n1wllp6wzX0/feikZhl+lJhBM2MxjEHbxuVC/qyK4Q+sA5R94i2s4L5j8jwO/WbJ9+e5pRp4LW44ZalCcHh/EgAtG8Yugnfan9ZggNF5/EZhfwsli6uK7a3mDDUqPBsqI9FnFCwZOLEaa4Kji7k4gG5ePXr5xg7Uqek24sHWOZE3ZJGlQjVN1DrWh0CLcoqu6UdXFHRu3V9sWdwzzKZ7watlvXWw8Tosjhz1/jJCFh/97DsaEqCnz8f2Y32riGC2W6rq/0=)
2025-09-19 11:01:39.113228 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK06s1qQX2KpCUQk9WgIBea+YJThyoq5bw1ftpcob2N93HA54+t9kRAX1yx6Se1ZbVm5js+Ij1NLPJ35whGTpts=)
2025-09-19 11:01:39.113239 | orchestrator |
2025-09-19 11:01:39.113250 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 11:01:39.113261 | orchestrator | Friday 19 September 2025 11:01:34 +0000 (0:00:01.080) 0:00:08.660 ******
2025-09-19 11:01:39.113273 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkEWTMU5Y8u4jbu+njVTrgPn3xcUzTsSCNCZG8r/oZbxFJ9aRMPqLlT4Y/FOBwzwlXywo8YXMClBEaob1R9e6bOYJ4gwzdKISjqxUr1R6bKvOhPepc1cAM9+iyPR1e0DOmn+lGA6b6N3RDQARyViX2f1xgXBYztpKrtjvpFJaK6LTBXcF2IAVl7C5okMkz1I//y9XKWgbRwx7goaRLCI+f8Xm0VQDZqa0MwWY+RbzQ8B21/Ne2xEaOqXHTXR1dO8irnD46fsa0bbLNYUGVxqKwQ7Tp1OHjUd30oujU0yaYAW7P7lzqEg0dJsbeqg08nq6XJkPDV1HEejuANK7Cq0sDxz4MJdUMKpV0EfReaAj61ayfy6PTOonWHmk0GSZhElsqo0SsuE6h4SKzI0rWntD3nrY2OYF73kuFXDIV6X1dufSUgys5EOu3vvhtoONx7N36t7iq/rZbO2xPA3hOuxkculjaduO85qC5E2kv9Q24FIQcKBxKWyNZjzywE83Oyfk=)
2025-09-19 11:01:39.113284 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNQ5uvQwdnc4ai/+jud/jY7GUJMI1GdkxYkw+svKz8AJz0Y5CjF4NIw0DML6jqEdEx4oIkEXEsfH08g21OUYWZY=)
2025-09-19 11:01:39.113295 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC8d4bGRLG02ndkpkMIYztOY8vVhOl8qVG6sO7AHeF08)
2025-09-19 11:01:39.113306 | orchestrator |
2025-09-19 11:01:39.113317 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 11:01:39.113328 | orchestrator | Friday 19 September 2025 11:01:35 +0000 (0:00:01.078) 0:00:09.739 ******
2025-09-19 11:01:39.113368 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMwQOw9eobYurkScK1IN+lZ83ZGgNlWMEfSDS7hp5tAMYRyNb1sBdPEaKI9Sbgxm1OIxyVc15ATh7X2jZ7tVf1g=)
2025-09-19 11:01:39.113380 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/VyQ8TkWdqczS1EiFIQwB08Cgg6/ju6c7WxKfbCrzKH97DLP40Udz4sFAHa50f7YkZpS8RlTviWcTbwpZd9FgbdSlL1RmKq+mdZ9IRP0g5w8FqQqffq/uk/61mXvlHfcKCwdsHRsdooPmnUDqDfZNMtaT8ITNccRblLgH6F2S1Sbp18jrZqyOqVmRoUGds6PuJklINjWNjrqs5477V3Je24g9DVP8Q6mOEldHTaHRi5FPNIBCZUEWGsiFwzsNr8isgVQX3y6hgG+SZ3AwC2jIGY7WdU+DDiSe3J0qigc4Jg05goxmm1u7FSgEAZwbQeoi3xp3fJEcPOVVN1+8FP6m04yaSlSGM6biMRYut+kQOfrmV0RtBbDvaUot4OVcY2J9MdFOeHZN2mFsLT22G/hqauU9pfGxfjQVHHPPdk+9S0nEzG6pWedjTEvkgvQgNyVXUy0mRLNLMWXH9Sf+c1C5HsFXpoeiPjqvNp8uVJL7Bkq3D7QGYRROi9LRu6k0AEM=)
2025-09-19 11:01:39.113391 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILCGDBcA0DDK6OIYkaRMofOMc3pzJQ3/x0awT7DT7wV0)
2025-09-19 11:01:39.113411 | orchestrator |
2025-09-19 11:01:39.113422 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 11:01:39.113433 | orchestrator | Friday 19 September 2025 11:01:36 +0000 (0:00:01.035) 0:00:10.775 ******
2025-09-19 11:01:39.113513 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRB4u5CtpIKFahGAkMSOge29jI/3koXqOPryb8ucBouEobboLMCpMov9RbVMKBhlw9OPlCzgws+zXmRHQ6ZwjK4FOu33Zn6UHrRmCMd2cKCY1cSZXKJ52BF4cVsL6pKdmr1d83C0fjTMcgqUDJS8FxCT1jo4xfKDI+xLTrCVUhtAeRw5gNODv/Nf7qWK+RO5CKK/KFrcu2ZHRak2kqGgfO0kxvmfB2Zxx46FUDSUkE5XaDNJfnBTLjGU+jcUuBtvwjjYXyl1gKlH9baLCt9L9XQf5iMSDHyXydj0+q805yjLz6haYEFWlOKxS7iHlsfnwsjB4ZEFjG5nfHoWcrrZE3cD4XP1BFg7PlAOGsbFad+qNf/cUbnE7yVdWXt2QvdzPMGpqpi+6HTPZmroem9i/dW42IcnjkwTLyy3T847PeF+ZwL0fmoeVYYymT4knqBWDb/XICm/PyxHtLoL32i1WVaE2ucv0rj6aAL6vC1U79rVVDRoKVAzcRpn4DpMbe6Mk=)
2025-09-19 11:01:39.113526 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOGG4I8UpjXLNzEVZfgc1WSjufeQED4elvZvXYEoVCXu)
2025-09-19 11:01:39.113542 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHImaxlWdfqF+zBsIkOBJBnvOp+hf/IO0Ax6BSAlFayYULB4SLg/PxN/KC2G/KQhmIYSlWzBgBD65VO/8WcGVug=)
2025-09-19 11:01:39.113553 | orchestrator |
2025-09-19 11:01:39.113564 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 11:01:39.113575 | orchestrator | Friday 19 September 2025 11:01:38 +0000 (0:00:01.090) 0:00:11.866 ******
2025-09-19 11:01:39.113594 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7X6qw1MJ8M0u3EsX6AtQtmAl7I48roOKN4VVmWtl67Phaalo4h+rS6nM7MXacrxVkuDi+TuXL1cmAUNR5rYcYq73b268sjFcu4O+kHOH41JP0y3oKJG+nKb6A5MKVZOCCWFGcdN4zHGQdKhTh1mtRknhFCU5mrtHRTajv2gQzUMi8AD1y2BAejq9Vglh4kXD8zAqmceU+6TwG0y0DPE7zDLyXSyuev2MADC9/6a0/q8SfwbecwvxEyzlYKE93n0/K7N0AAyplAwf5wDfvmvs8q+fSbfPaM1R8hSbqJO9TZuPbXI8mlGk61ilWJN4Dsep3amnmnpUdIT6jQj/Y2sLVZkY+pp7MEOpi0AuK3cY15T15DB8niHIX7s48YdthpRsH+Qgphy3qgw4y1eTcvr+thXWVsrC7u+78tua8zsU+sZhjxwmuX3zyHU7Wz3MXQJ96olu4OiqnAz6DoSt01upGSqzGGOtB2proOUHbBfERiqcni3XVBqY6FOu1mefKwys=)
2025-09-19 11:01:49.840666 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLhVcalne9EHN6RiS6fhg/6obwm93Hp1FIIJ8sStADIIWbnfvaQGudDfGhoVbrj/ITWXlNuqXc4E2v0gAod0bXk=)
2025-09-19 11:01:49.840816 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBuk3l3hofsjwTpGBYdiWU07WhxODebrxxjPTt7KXRod)
2025-09-19 11:01:49.840846 | orchestrator |
2025-09-19 11:01:49.840869 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 11:01:49.840891 | orchestrator | Friday 19 September 2025 11:01:39 +0000 (0:00:01.049) 0:00:12.915 ******
2025-09-19 11:01:49.840912 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM4+vXpTf6XQzPAmFcBY1FmgWKMhP0EY9IHuNf3dJyhf)
2025-09-19 11:01:49.840936 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwXVpE9otQEdV8TWYdlx09mYMsiLaKku/v2NjiyaALGk//QsN7Q+LggvEyAkwNgY7NoQd5Vuus9VDvQGc7RKZpZQOjctDeBlnavsU137mpogovhS+rbSW8cU2Qey6AdgXkidgPol/Pzo0zOQEvzssVSxKCbXOabFtQpg8Rb4vWoKXPs1fNuQggVA01iBtIe4BkldYBD4QFZxI5iNsjBu944dVJCjMgzrBimG414HYOxK82fvrHZ+1tWg87jnAIah+QYf9d1XTGj5MIer4Ta2IiZa2aAzmAcmI0coIEsx2nYSNQwfCRGZgB90zxJhYnVct62hbKIHx+eZk1TegQnihg5OZOM1ceVYlJN5+GJkS9f0Dsdf6EFcAzy6pqLhBEDdhAdOOgd1i7ZBm+HZc10nk5o2Ui1K49YyZnbqknp4HApV27qzxPr57icX4xvkaljiYu9VTj8zt+tBbbwNdp3SIWzPdetlyj8f/ymRMvTTE+OEHririSyUmCLRLI5NVwY/k=)
2025-09-19 11:01:49.840958 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKB308mIZSJejCFPMd0CK5Q6W35IxdDv2EQSUJ5zT9DOL83a6wkaGnZsjlaYWP9F0H4IWcq6m2sTmeLysikeLJQ=)
2025-09-19 11:01:49.840977 | orchestrator |
2025-09-19 11:01:49.840996 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-09-19 11:01:49.841048 | orchestrator | Friday 19 September 2025 11:01:40 +0000 (0:00:01.045) 0:00:13.961 ******
2025-09-19 11:01:49.841069 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-19 11:01:49.841088 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-19 11:01:49.841106 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-19 11:01:49.841124 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-09-19 11:01:49.841142 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-19 11:01:49.841159 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-19 11:01:49.841178 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-19 11:01:49.841198 | orchestrator |
2025-09-19 11:01:49.841218 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-09-19 11:01:49.841237 | orchestrator | Friday 19 September 2025 11:01:45 +0000 (0:00:05.278) 0:00:19.240 ******
2025-09-19 11:01:49.841257 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-09-19 11:01:49.841277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-09-19 11:01:49.841298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-09-19 11:01:49.841317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-09-19 11:01:49.841518 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-09-19 11:01:49.841537 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-09-19 11:01:49.841548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-09-19 11:01:49.841559 | orchestrator |
2025-09-19 11:01:49.841571 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 11:01:49.841582 | orchestrator | Friday 19 September 2025 11:01:45 +0000 (0:00:00.186) 0:00:19.427 ******
2025-09-19 11:01:49.841593 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAreBwIcm1/I1C2j21JV0Kte2AOTRol6gmWyiect5bN1)
2025-09-19 11:01:49.841655 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQChj7C3EJtYo42+yDrSSEokiwCrnRVydWSO+YkOLRjYJTt6UV0dUta1w6mE3uMwSBCeZ14JhIQLv/fIw+rfhPW72xHkZu1V0vhzLuA4yb3W4YeGkVPGFehDHeGYw7GN6bR4pjuTHe8XaoInooXbvslCBZS7WMweyIQ+MZvZrlemr8iDUowedmpmRg2oF8Nl3nLcgxkpplSs7b4K1GGpkfPnDRHvf2H2SXWgXZVz/9pOgmbvH+xL1728Rwss5o1pTkL45TJyWzSPOshsw7vxfH24aC0xCcJstU/icrDlJY0h7d6YB4NavXIJOSJY3fK4h7PXIVQ6pMMp+DMbxtFVfrdwaCR7Hqht+Dd7HXClGlxI5gLsJgSpsUKtfEF7IV2wcfCGWFiJdpFlrzyKhhJvy2o7eR19DST1QxUAgNfix9LSN2nOFkvoYsO+agWNPskiFPB9N4COVaX+ELxxbSynqoAF9RazJ8bNjiGxVtezGPABwRV0dKf/cItC2weupfhF0tk=)
2025-09-19 11:01:49.841670 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB0+savVLcsi+jPsxbJ1cHdb56Br5VQiYEr3boe7XN2L25+4sL6zRELw9J+NPCEjIppJHrJerQJX0Zz1lKU6r10=)
2025-09-19 11:01:49.841682 | orchestrator |
2025-09-19 11:01:49.841693 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 11:01:49.841704 | orchestrator | Friday 19 September 2025 11:01:46 +0000 (0:00:01.064) 0:00:20.492 ******
2025-09-19 11:01:49.841729 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMJ/1uasdBLndcuq34KoUNZfvndkE63+0ZmYxaDfUxbe)
2025-09-19 11:01:49.841741 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCCzV+FHr3/gqiz0HGy/RdjKHYjLevBQh6Uw6z5SJfUuKJ/umq3juq01uOw3vDKBnXLw+O3CaePYbXn/LlAulhACV9A3dEA1INuC3WwW480hbHRp9cz5YFpAss2By9F5jz4xjaJcHCK27gKJPPJ863VABz7WF7aUnJSk0mUk048+50HdyZFDPLolrACELWB2xWPJ6PAgIiWY6yME0v52ZNhxrATw1goY1FR3nEIZB763dGk/uXVi+mWL92K3n1wllp6wzX0/feikZhl+lJhBM2MxjEHbxuVC/qyK4Q+sA5R94i2s4L5j8jwO/WbJ9+e5pRp4LW44ZalCcHh/EgAtG8Yugnfan9ZggNF5/EZhfwsli6uK7a3mDDUqPBsqI9FnFCwZOLEaa4Kji7k4gG5ePXr5xg7Uqek24sHWOZE3ZJGlQjVN1DrWh0CLcoqu6UdXFHRu3V9sWdwzzKZ7watlvXWw8Tosjhz1/jJCFh/97DsaEqCnz8f2Y32riGC2W6rq/0=)
2025-09-19 11:01:49.841752 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK06s1qQX2KpCUQk9WgIBea+YJThyoq5bw1ftpcob2N93HA54+t9kRAX1yx6Se1ZbVm5js+Ij1NLPJ35whGTpts=)
2025-09-19 11:01:49.841763 | orchestrator |
2025-09-19 11:01:49.841774 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 11:01:49.841784 | orchestrator | Friday 19 September 2025 11:01:47 +0000 (0:00:01.049) 0:00:21.541 ******
2025-09-19 11:01:49.841795 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNQ5uvQwdnc4ai/+jud/jY7GUJMI1GdkxYkw+svKz8AJz0Y5CjF4NIw0DML6jqEdEx4oIkEXEsfH08g21OUYWZY=)
2025-09-19 11:01:49.841807 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkEWTMU5Y8u4jbu+njVTrgPn3xcUzTsSCNCZG8r/oZbxFJ9aRMPqLlT4Y/FOBwzwlXywo8YXMClBEaob1R9e6bOYJ4gwzdKISjqxUr1R6bKvOhPepc1cAM9+iyPR1e0DOmn+lGA6b6N3RDQARyViX2f1xgXBYztpKrtjvpFJaK6LTBXcF2IAVl7C5okMkz1I//y9XKWgbRwx7goaRLCI+f8Xm0VQDZqa0MwWY+RbzQ8B21/Ne2xEaOqXHTXR1dO8irnD46fsa0bbLNYUGVxqKwQ7Tp1OHjUd30oujU0yaYAW7P7lzqEg0dJsbeqg08nq6XJkPDV1HEejuANK7Cq0sDxz4MJdUMKpV0EfReaAj61ayfy6PTOonWHmk0GSZhElsqo0SsuE6h4SKzI0rWntD3nrY2OYF73kuFXDIV6X1dufSUgys5EOu3vvhtoONx7N36t7iq/rZbO2xPA3hOuxkculjaduO85qC5E2kv9Q24FIQcKBxKWyNZjzywE83Oyfk=)
2025-09-19 11:01:49.841818 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC8d4bGRLG02ndkpkMIYztOY8vVhOl8qVG6sO7AHeF08)
2025-09-19 11:01:49.841828 | orchestrator |
2025-09-19 11:01:49.841839 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 11:01:49.841850 | orchestrator | Friday 19 September 2025 11:01:48 +0000 (0:00:01.043) 0:00:22.585 ******
2025-09-19 11:01:49.841861 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMwQOw9eobYurkScK1IN+lZ83ZGgNlWMEfSDS7hp5tAMYRyNb1sBdPEaKI9Sbgxm1OIxyVc15ATh7X2jZ7tVf1g=)
2025-09-19 11:01:49.841872 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/VyQ8TkWdqczS1EiFIQwB08Cgg6/ju6c7WxKfbCrzKH97DLP40Udz4sFAHa50f7YkZpS8RlTviWcTbwpZd9FgbdSlL1RmKq+mdZ9IRP0g5w8FqQqffq/uk/61mXvlHfcKCwdsHRsdooPmnUDqDfZNMtaT8ITNccRblLgH6F2S1Sbp18jrZqyOqVmRoUGds6PuJklINjWNjrqs5477V3Je24g9DVP8Q6mOEldHTaHRi5FPNIBCZUEWGsiFwzsNr8isgVQX3y6hgG+SZ3AwC2jIGY7WdU+DDiSe3J0qigc4Jg05goxmm1u7FSgEAZwbQeoi3xp3fJEcPOVVN1+8FP6m04yaSlSGM6biMRYut+kQOfrmV0RtBbDvaUot4OVcY2J9MdFOeHZN2mFsLT22G/hqauU9pfGxfjQVHHPPdk+9S0nEzG6pWedjTEvkgvQgNyVXUy0mRLNLMWXH9Sf+c1C5HsFXpoeiPjqvNp8uVJL7Bkq3D7QGYRROi9LRu6k0AEM=)
2025-09-19 11:01:49.841894 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILCGDBcA0DDK6OIYkaRMofOMc3pzJQ3/x0awT7DT7wV0)
2025-09-19 11:01:54.120229 | orchestrator |
2025-09-19 11:01:54.120373 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 11:01:54.120391 | orchestrator | Friday 19 September 2025 11:01:49 +0000 (0:00:01.052) 0:00:23.638 ******
2025-09-19 11:01:54.120404 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOGG4I8UpjXLNzEVZfgc1WSjufeQED4elvZvXYEoVCXu)
2025-09-19 11:01:54.120446 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRB4u5CtpIKFahGAkMSOge29jI/3koXqOPryb8ucBouEobboLMCpMov9RbVMKBhlw9OPlCzgws+zXmRHQ6ZwjK4FOu33Zn6UHrRmCMd2cKCY1cSZXKJ52BF4cVsL6pKdmr1d83C0fjTMcgqUDJS8FxCT1jo4xfKDI+xLTrCVUhtAeRw5gNODv/Nf7qWK+RO5CKK/KFrcu2ZHRak2kqGgfO0kxvmfB2Zxx46FUDSUkE5XaDNJfnBTLjGU+jcUuBtvwjjYXyl1gKlH9baLCt9L9XQf5iMSDHyXydj0+q805yjLz6haYEFWlOKxS7iHlsfnwsjB4ZEFjG5nfHoWcrrZE3cD4XP1BFg7PlAOGsbFad+qNf/cUbnE7yVdWXt2QvdzPMGpqpi+6HTPZmroem9i/dW42IcnjkwTLyy3T847PeF+ZwL0fmoeVYYymT4knqBWDb/XICm/PyxHtLoL32i1WVaE2ucv0rj6aAL6vC1U79rVVDRoKVAzcRpn4DpMbe6Mk=)
2025-09-19 11:01:54.120462 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHImaxlWdfqF+zBsIkOBJBnvOp+hf/IO0Ax6BSAlFayYULB4SLg/PxN/KC2G/KQhmIYSlWzBgBD65VO/8WcGVug=)
2025-09-19 11:01:54.120475 | orchestrator |
2025-09-19 11:01:54.120487 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 11:01:54.120498 | orchestrator | Friday 19 September 2025 11:01:50 +0000 (0:00:01.098) 0:00:24.736 ******
2025-09-19 11:01:54.120509 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBuk3l3hofsjwTpGBYdiWU07WhxODebrxxjPTt7KXRod)
2025-09-19 11:01:54.120520 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7X6qw1MJ8M0u3EsX6AtQtmAl7I48roOKN4VVmWtl67Phaalo4h+rS6nM7MXacrxVkuDi+TuXL1cmAUNR5rYcYq73b268sjFcu4O+kHOH41JP0y3oKJG+nKb6A5MKVZOCCWFGcdN4zHGQdKhTh1mtRknhFCU5mrtHRTajv2gQzUMi8AD1y2BAejq9Vglh4kXD8zAqmceU+6TwG0y0DPE7zDLyXSyuev2MADC9/6a0/q8SfwbecwvxEyzlYKE93n0/K7N0AAyplAwf5wDfvmvs8q+fSbfPaM1R8hSbqJO9TZuPbXI8mlGk61ilWJN4Dsep3amnmnpUdIT6jQj/Y2sLVZkY+pp7MEOpi0AuK3cY15T15DB8niHIX7s48YdthpRsH+Qgphy3qgw4y1eTcvr+thXWVsrC7u+78tua8zsU+sZhjxwmuX3zyHU7Wz3MXQJ96olu4OiqnAz6DoSt01upGSqzGGOtB2proOUHbBfERiqcni3XVBqY6FOu1mefKwys=)
2025-09-19 11:01:54.120549 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLhVcalne9EHN6RiS6fhg/6obwm93Hp1FIIJ8sStADIIWbnfvaQGudDfGhoVbrj/ITWXlNuqXc4E2v0gAod0bXk=)
2025-09-19 11:01:54.120561 | orchestrator |
2025-09-19 11:01:54.120572 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 11:01:54.120583 | orchestrator | Friday 19 September 2025 11:01:52 +0000 (0:00:01.072) 0:00:25.809 ******
2025-09-19 11:01:54.120594 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKB308mIZSJejCFPMd0CK5Q6W35IxdDv2EQSUJ5zT9DOL83a6wkaGnZsjlaYWP9F0H4IWcq6m2sTmeLysikeLJQ=)
2025-09-19 11:01:54.120610 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwXVpE9otQEdV8TWYdlx09mYMsiLaKku/v2NjiyaALGk//QsN7Q+LggvEyAkwNgY7NoQd5Vuus9VDvQGc7RKZpZQOjctDeBlnavsU137mpogovhS+rbSW8cU2Qey6AdgXkidgPol/Pzo0zOQEvzssVSxKCbXOabFtQpg8Rb4vWoKXPs1fNuQggVA01iBtIe4BkldYBD4QFZxI5iNsjBu944dVJCjMgzrBimG414HYOxK82fvrHZ+1tWg87jnAIah+QYf9d1XTGj5MIer4Ta2IiZa2aAzmAcmI0coIEsx2nYSNQwfCRGZgB90zxJhYnVct62hbKIHx+eZk1TegQnihg5OZOM1ceVYlJN5+GJkS9f0Dsdf6EFcAzy6pqLhBEDdhAdOOgd1i7ZBm+HZc10nk5o2Ui1K49YyZnbqknp4HApV27qzxPr57icX4xvkaljiYu9VTj8zt+tBbbwNdp3SIWzPdetlyj8f/ymRMvTTE+OEHririSyUmCLRLI5NVwY/k=)
2025-09-19 11:01:54.120622 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM4+vXpTf6XQzPAmFcBY1FmgWKMhP0EY9IHuNf3dJyhf)
2025-09-19 11:01:54.120634 | orchestrator |
2025-09-19 11:01:54.120645 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-09-19 11:01:54.120655 | orchestrator | Friday 19 September 2025 11:01:53 +0000 (0:00:01.070) 0:00:26.879 ******
2025-09-19 11:01:54.120667 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-19 11:01:54.120678 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-19 11:01:54.120688 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-19 11:01:54.120707 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-19 11:01:54.120718 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-19 11:01:54.120728 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-19 11:01:54.120739 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-19 11:01:54.120750 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:01:54.120761 | orchestrator |
2025-09-19 11:01:54.120789 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-09-19 11:01:54.120803 | orchestrator | Friday 19 September 2025 11:01:53 +0000 (0:00:00.161) 0:00:27.041 ******
2025-09-19 11:01:54.120815 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:01:54.120828 | orchestrator |
2025-09-19 11:01:54.120840 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-09-19 11:01:54.120852 | orchestrator | Friday 19 September 2025 11:01:53 +0000 (0:00:00.070) 0:00:27.112 ******
2025-09-19 11:01:54.120864 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:01:54.120877 | orchestrator |
2025-09-19 11:01:54.120890 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-09-19 11:01:54.120902 | orchestrator | Friday 19 September 2025 11:01:53 +0000 (0:00:00.063) 0:00:27.176 ******
2025-09-19 11:01:54.120914 | orchestrator | changed: [testbed-manager]
2025-09-19 11:01:54.120927 | orchestrator |
2025-09-19 11:01:54.120939 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:01:54.120952 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 11:01:54.120965 | orchestrator |
2025-09-19 11:01:54.120978 | orchestrator |
2025-09-19 11:01:54.120990 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:01:54.121002 | orchestrator | Friday 19 September 2025 11:01:53 +0000 (0:00:00.498) 0:00:27.674 ******
2025-09-19 11:01:54.121015 | orchestrator | ===============================================================================
2025-09-19 11:01:54.121028 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.06s
2025-09-19 11:01:54.121040 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.28s
2025-09-19 11:01:54.121053 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s
2025-09-19 11:01:54.121066 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2025-09-19 11:01:54.121078 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-09-19 11:01:54.121090 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-09-19 11:01:54.121103 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-09-19 11:01:54.121115 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-09-19 11:01:54.121127 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-09-19 11:01:54.121139 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-09-19 11:01:54.121150 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-09-19 11:01:54.121161 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-09-19 11:01:54.121171 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-09-19 11:01:54.121182 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-09-19 11:01:54.121193 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-09-19 11:01:54.121204 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-09-19 11:01:54.121215 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.50s
2025-09-19 11:01:54.121226 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s
2025-09-19 11:01:54.121244 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s
2025-09-19 11:01:54.121255 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s
2025-09-19 11:01:54.380269 | orchestrator | + osism apply squid
2025-09-19 11:02:06.297230 | orchestrator | 2025-09-19 11:02:06 | INFO  | Task 8847addc-283f-4f8b-b41a-60162c02413f (squid) was prepared for execution.
2025-09-19 11:02:06.297378 | orchestrator | 2025-09-19 11:02:06 | INFO  | It takes a moment until task 8847addc-283f-4f8b-b41a-60162c02413f (squid) has been started and output is visible here.
2025-09-19 11:03:59.435745 | orchestrator |
2025-09-19 11:03:59.435842 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-09-19 11:03:59.435858 | orchestrator |
2025-09-19 11:03:59.435870 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-09-19 11:03:59.435882 | orchestrator | Friday 19 September 2025 11:02:10 +0000 (0:00:00.190) 0:00:00.190 ******
2025-09-19 11:03:59.435893 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 11:03:59.435905 | orchestrator |
2025-09-19 11:03:59.435916 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-09-19 11:03:59.435927 | orchestrator | Friday 19 September 2025 11:02:10 +0000 (0:00:00.100) 0:00:00.291 ******
2025-09-19 11:03:59.435939 | orchestrator | ok: [testbed-manager]
2025-09-19 11:03:59.435951 | orchestrator |
2025-09-19 11:03:59.435962 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-09-19 11:03:59.435973 | orchestrator | Friday 19 September 2025 11:02:11 +0000 (0:00:01.412) 0:00:01.703 ******
2025-09-19 11:03:59.435984 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-09-19 11:03:59.435995 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-09-19 11:03:59.436006 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-09-19 11:03:59.436017 | orchestrator |
2025-09-19 11:03:59.436028 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-09-19 11:03:59.436038 | orchestrator | Friday 19 September 2025 11:02:12 +0000 (0:00:01.128) 0:00:02.832 ******
2025-09-19 11:03:59.436049 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-09-19 11:03:59.436060 | orchestrator |
2025-09-19 11:03:59.436071 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-09-19 11:03:59.436082 | orchestrator | Friday 19 September 2025 11:02:13 +0000 (0:00:01.065) 0:00:03.898 ******
2025-09-19 11:03:59.436093 | orchestrator | ok: [testbed-manager]
2025-09-19 11:03:59.436104 | orchestrator |
2025-09-19 11:03:59.436115 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-09-19 11:03:59.436125 | orchestrator | Friday 19 September 2025 11:02:14 +0000 (0:00:00.367) 0:00:04.266 ******
2025-09-19 11:03:59.436136 | orchestrator | changed: [testbed-manager]
2025-09-19 11:03:59.436147 | orchestrator |
2025-09-19 11:03:59.436158 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-09-19 11:03:59.436232 | orchestrator | Friday 19 September 2025 11:02:15 +0000 (0:00:00.924) 0:00:05.190 ******
2025-09-19 11:03:59.436246 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-09-19 11:03:59.436257 | orchestrator | ok: [testbed-manager]
2025-09-19 11:03:59.436268 | orchestrator |
2025-09-19 11:03:59.436279 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-09-19 11:03:59.436291 | orchestrator | Friday 19 September 2025 11:02:46 +0000 (0:00:31.556) 0:00:36.747 ******
2025-09-19 11:03:59.436302 | orchestrator | changed: [testbed-manager]
2025-09-19 11:03:59.436315 | orchestrator |
2025-09-19 11:03:59.436328 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-09-19 11:03:59.436340 | orchestrator | Friday 19 September 2025 11:02:58 +0000 (0:00:11.788) 0:00:48.536 ******
2025-09-19 11:03:59.436352 | orchestrator | Pausing for 60 seconds
2025-09-19 11:03:59.436385 | orchestrator | changed: [testbed-manager]
2025-09-19 11:03:59.436398 | orchestrator |
2025-09-19 11:03:59.436411 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-09-19 11:03:59.436424 | orchestrator | Friday 19 September 2025 11:03:58 +0000 (0:01:00.081) 0:01:48.617 ******
2025-09-19 11:03:59.436437 | orchestrator | ok: [testbed-manager]
2025-09-19 11:03:59.436449 | orchestrator |
2025-09-19 11:03:59.436461 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-09-19 11:03:59.436473 | orchestrator | Friday 19 September 2025 11:03:58 +0000 (0:00:00.052) 0:01:48.670 ******
2025-09-19 11:03:59.436486 | orchestrator | changed: [testbed-manager]
2025-09-19 11:03:59.436498 | orchestrator |
2025-09-19 11:03:59.436510 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:03:59.436522 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:03:59.436534 | orchestrator |
2025-09-19 11:03:59.436546 | orchestrator |
2025-09-19 11:03:59.436559 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:03:59.436571 | orchestrator | Friday 19 September 2025 11:03:59 +0000 (0:00:00.528) 0:01:49.198 ******
2025-09-19 11:03:59.436584 | orchestrator | ===============================================================================
2025-09-19 11:03:59.436595 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-09-19 11:03:59.436608 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.56s
2025-09-19 11:03:59.436620 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.79s
2025-09-19 11:03:59.436633 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.41s
2025-09-19 11:03:59.436645 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.13s
2025-09-19 11:03:59.436658 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s
2025-09-19 11:03:59.436670 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.92s
2025-09-19 11:03:59.436681 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.53s
2025-09-19 11:03:59.436692 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s
2025-09-19 11:03:59.436702 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2025-09-19 11:03:59.436713 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.05s
2025-09-19 11:03:59.600649 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-09-19 11:03:59.600739 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-09-19 11:03:59.605968 | orchestrator | ++ semver 9.2.0 9.0.0
2025-09-19 11:03:59.672434 | orchestrator | + [[ 1 -lt 0 ]]
2025-09-19 11:03:59.674339 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-09-19 11:04:11.240201 | orchestrator | 2025-09-19 11:04:11 | INFO  | Task fa4e2fa7-d82d-496c-948c-de46c60a64fd (operator) was prepared for execution.
2025-09-19 11:04:11.240302 | orchestrator | 2025-09-19 11:04:11 | INFO  | It takes a moment until task fa4e2fa7-d82d-496c-948c-de46c60a64fd (operator) has been started and output is visible here.
2025-09-19 11:04:26.770740 | orchestrator |
2025-09-19 11:04:26.770851 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-09-19 11:04:26.770867 | orchestrator |
2025-09-19 11:04:26.770880 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 11:04:26.770891 | orchestrator | Friday 19 September 2025 11:04:15 +0000 (0:00:00.141) 0:00:00.141 ******
2025-09-19 11:04:26.770903 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:04:26.770915 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:04:26.770926 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:04:26.770936 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:04:26.770947 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:04:26.770979 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:04:26.770991 | orchestrator |
2025-09-19 11:04:26.771002 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-09-19 11:04:26.771013 | orchestrator | Friday 19 September 2025 11:04:18 +0000 (0:00:03.220) 0:00:03.362 ******
2025-09-19 11:04:26.771024 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:04:26.771034 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:04:26.771045 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:04:26.771056 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:04:26.771066 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:04:26.771077 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:04:26.771088 | orchestrator |
2025-09-19 11:04:26.771099 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-09-19 11:04:26.771109 | orchestrator |
2025-09-19 11:04:26.771121 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-09-19 11:04:26.771158 | orchestrator | Friday 19 September 2025 11:04:19 +0000 (0:00:00.774) 0:00:04.136 ******
2025-09-19 11:04:26.771170 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:04:26.771180 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:04:26.771191 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:04:26.771201 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:04:26.771212 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:04:26.771222 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:04:26.771233 | orchestrator |
2025-09-19 11:04:26.771244 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-09-19 11:04:26.771255 | orchestrator | Friday 19 September 2025 11:04:19 +0000 (0:00:00.169) 0:00:04.305 ******
2025-09-19 11:04:26.771265 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:04:26.771276 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:04:26.771289 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:04:26.771302 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:04:26.771314 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:04:26.771326 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:04:26.771338 | orchestrator |
2025-09-19 11:04:26.771350 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-09-19 11:04:26.771362 | orchestrator | Friday 19 September 2025 11:04:19 +0000 (0:00:00.170) 0:00:04.476 ******
2025-09-19 11:04:26.771374 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:04:26.771387 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:04:26.771399 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:04:26.771411 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:04:26.771423 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:04:26.771436 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:04:26.771448 | orchestrator |
2025-09-19 11:04:26.771460 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-09-19 11:04:26.771472 | orchestrator | Friday 19 September 2025 11:04:19 +0000 (0:00:00.601) 0:00:05.078 ******
2025-09-19 11:04:26.771485 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:04:26.771497 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:04:26.771509 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:04:26.771522 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:04:26.771534 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:04:26.771545 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:04:26.771558 | orchestrator |
2025-09-19 11:04:26.771570 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-09-19 11:04:26.771583 | orchestrator | Friday 19 September 2025 11:04:20 +0000 (0:00:00.773) 0:00:05.851 ******
2025-09-19 11:04:26.771596 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-09-19 11:04:26.771608 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-09-19 11:04:26.771620 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-09-19 11:04:26.771631 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-09-19 11:04:26.771641 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-09-19 11:04:26.771652 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-09-19 11:04:26.771671 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-09-19 11:04:26.771682 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-09-19 11:04:26.771693 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-09-19 11:04:26.771704 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-09-19 11:04:26.771715 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-09-19 11:04:26.771725 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-09-19 11:04:26.771736 | orchestrator |
2025-09-19 11:04:26.771751 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-09-19 11:04:26.771762 | orchestrator | Friday 19 September 2025 11:04:21 +0000 (0:00:01.208) 0:00:07.059 ******
2025-09-19 11:04:26.771773 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:04:26.771784 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:04:26.771795 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:04:26.771805 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:04:26.771816 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:04:26.771826 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:04:26.771837 | orchestrator |
2025-09-19 11:04:26.771848 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-09-19 11:04:26.771860 | orchestrator | Friday 19 September 2025 11:04:23 +0000 (0:00:01.256) 0:00:08.316 ******
2025-09-19 11:04:26.771871 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-09-19 11:04:26.771882 | orchestrator | with a mode of 0700, this may cause issues when running as another user.
To 2025-09-19 11:04:26.771893 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-09-19 11:04:26.771904 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 11:04:26.771929 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 11:04:26.771941 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 11:04:26.771952 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 11:04:26.771962 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 11:04:26.771973 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 11:04:26.771984 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-09-19 11:04:26.771994 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-09-19 11:04:26.772023 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-09-19 11:04:26.772034 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-09-19 11:04:26.772045 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-09-19 11:04:26.772055 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-09-19 11:04:26.772066 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-09-19 11:04:26.772077 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-09-19 11:04:26.772087 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-09-19 11:04:26.772098 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-09-19 11:04:26.772109 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-09-19 11:04:26.772120 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-09-19 11:04:26.772153 | 
orchestrator | 2025-09-19 11:04:26.772164 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-19 11:04:26.772176 | orchestrator | Friday 19 September 2025 11:04:24 +0000 (0:00:01.344) 0:00:09.661 ****** 2025-09-19 11:04:26.772187 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:04:26.772198 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:04:26.772209 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:04:26.772265 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:04:26.772279 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:04:26.772299 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:04:26.772310 | orchestrator | 2025-09-19 11:04:26.772322 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-19 11:04:26.772333 | orchestrator | Friday 19 September 2025 11:04:24 +0000 (0:00:00.175) 0:00:09.837 ****** 2025-09-19 11:04:26.772344 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:04:26.772356 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:04:26.772367 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:04:26.772378 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:04:26.772389 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:04:26.772400 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:04:26.772412 | orchestrator | 2025-09-19 11:04:26.772423 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-19 11:04:26.772434 | orchestrator | Friday 19 September 2025 11:04:25 +0000 (0:00:00.600) 0:00:10.437 ****** 2025-09-19 11:04:26.772445 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:04:26.772456 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:04:26.772468 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:04:26.772479 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
11:04:26.772490 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:04:26.772501 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:04:26.772512 | orchestrator | 2025-09-19 11:04:26.772523 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-19 11:04:26.772535 | orchestrator | Friday 19 September 2025 11:04:25 +0000 (0:00:00.202) 0:00:10.639 ****** 2025-09-19 11:04:26.772546 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-19 11:04:26.772557 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:04:26.772568 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 11:04:26.772579 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 11:04:26.772591 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:04:26.772602 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 11:04:26.772613 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:04:26.772624 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 11:04:26.772635 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:04:26.772646 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:04:26.772657 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-19 11:04:26.772668 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:04:26.772679 | orchestrator | 2025-09-19 11:04:26.772691 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-19 11:04:26.772702 | orchestrator | Friday 19 September 2025 11:04:26 +0000 (0:00:00.726) 0:00:11.366 ****** 2025-09-19 11:04:26.772713 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:04:26.772724 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:04:26.772736 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:04:26.772747 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:04:26.772758 | orchestrator | skipping: [testbed-node-4] 2025-09-19 
11:04:26.772769 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:04:26.772780 | orchestrator | 2025-09-19 11:04:26.772792 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-19 11:04:26.772803 | orchestrator | Friday 19 September 2025 11:04:26 +0000 (0:00:00.159) 0:00:11.525 ****** 2025-09-19 11:04:26.772814 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:04:26.772825 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:04:26.772836 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:04:26.772847 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:04:26.772858 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:04:26.772870 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:04:26.772881 | orchestrator | 2025-09-19 11:04:26.772897 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-19 11:04:26.772909 | orchestrator | Friday 19 September 2025 11:04:26 +0000 (0:00:00.160) 0:00:11.686 ****** 2025-09-19 11:04:26.772920 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:04:26.772937 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:04:26.772949 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:04:26.772961 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:04:26.772991 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:04:27.885414 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:04:27.885515 | orchestrator | 2025-09-19 11:04:27.885532 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-19 11:04:27.885545 | orchestrator | Friday 19 September 2025 11:04:26 +0000 (0:00:00.179) 0:00:11.865 ****** 2025-09-19 11:04:27.885556 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:04:27.885567 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:04:27.885577 | orchestrator | changed: [testbed-node-2] 2025-09-19 
11:04:27.885588 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:04:27.885599 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:04:27.885609 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:04:27.885620 | orchestrator | 2025-09-19 11:04:27.885631 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-19 11:04:27.885642 | orchestrator | Friday 19 September 2025 11:04:27 +0000 (0:00:00.619) 0:00:12.485 ****** 2025-09-19 11:04:27.885653 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:04:27.885663 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:04:27.885674 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:04:27.885684 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:04:27.885695 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:04:27.885706 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:04:27.885716 | orchestrator | 2025-09-19 11:04:27.885727 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:04:27.885739 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:04:27.885752 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:04:27.885763 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:04:27.885774 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:04:27.885784 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:04:27.885795 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:04:27.885806 | orchestrator | 2025-09-19 11:04:27.885817 | orchestrator | 2025-09-19 11:04:27.885827 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:04:27.885838 | orchestrator | Friday 19 September 2025 11:04:27 +0000 (0:00:00.230) 0:00:12.716 ****** 2025-09-19 11:04:27.885850 | orchestrator | =============================================================================== 2025-09-19 11:04:27.885861 | orchestrator | Gathering Facts --------------------------------------------------------- 3.22s 2025-09-19 11:04:27.885872 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.34s 2025-09-19 11:04:27.885884 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s 2025-09-19 11:04:27.885895 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s 2025-09-19 11:04:27.885906 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s 2025-09-19 11:04:27.885916 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s 2025-09-19 11:04:27.885927 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s 2025-09-19 11:04:27.885968 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.62s 2025-09-19 11:04:27.885981 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s 2025-09-19 11:04:27.885993 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s 2025-09-19 11:04:27.886006 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2025-09-19 11:04:27.886083 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s 2025-09-19 11:04:27.886096 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s 2025-09-19 11:04:27.886107 | orchestrator 
| osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2025-09-19 11:04:27.886118 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.17s 2025-09-19 11:04:27.886154 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2025-09-19 11:04:27.886166 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-09-19 11:04:27.886177 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s 2025-09-19 11:04:28.183493 | orchestrator | + osism apply --environment custom facts 2025-09-19 11:04:29.965991 | orchestrator | 2025-09-19 11:04:29 | INFO  | Trying to run play facts in environment custom 2025-09-19 11:04:40.052019 | orchestrator | 2025-09-19 11:04:40 | INFO  | Task 3371f40b-1d28-4f20-be64-2d11c3b6f7d9 (facts) was prepared for execution. 2025-09-19 11:04:40.052176 | orchestrator | 2025-09-19 11:04:40 | INFO  | It takes a moment until task 3371f40b-1d28-4f20-be64-2d11c3b6f7d9 (facts) has been started and output is visible here. 
2025-09-19 11:05:25.245940 | orchestrator | 2025-09-19 11:05:25.246151 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-09-19 11:05:25.246172 | orchestrator | 2025-09-19 11:05:25.246184 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-19 11:05:25.246195 | orchestrator | Friday 19 September 2025 11:04:43 +0000 (0:00:00.093) 0:00:00.093 ****** 2025-09-19 11:05:25.246207 | orchestrator | ok: [testbed-manager] 2025-09-19 11:05:25.246219 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:05:25.246231 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:05:25.246242 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:05:25.246254 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:05:25.246265 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:05:25.246275 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:05:25.246286 | orchestrator | 2025-09-19 11:05:25.246297 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-09-19 11:05:25.246328 | orchestrator | Friday 19 September 2025 11:04:45 +0000 (0:00:01.394) 0:00:01.488 ****** 2025-09-19 11:05:25.246339 | orchestrator | ok: [testbed-manager] 2025-09-19 11:05:25.246350 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:05:25.246361 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:05:25.246372 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:05:25.246382 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:05:25.246393 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:05:25.246404 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:05:25.246415 | orchestrator | 2025-09-19 11:05:25.246425 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-09-19 11:05:25.246436 | orchestrator | 2025-09-19 11:05:25.246447 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2025-09-19 11:05:25.246458 | orchestrator | Friday 19 September 2025 11:04:46 +0000 (0:00:01.226) 0:00:02.715 ****** 2025-09-19 11:05:25.246469 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:05:25.246481 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:05:25.246493 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:05:25.246505 | orchestrator | 2025-09-19 11:05:25.246518 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-19 11:05:25.246531 | orchestrator | Friday 19 September 2025 11:04:46 +0000 (0:00:00.111) 0:00:02.826 ****** 2025-09-19 11:05:25.246565 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:05:25.246577 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:05:25.246588 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:05:25.246599 | orchestrator | 2025-09-19 11:05:25.246610 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-19 11:05:25.246620 | orchestrator | Friday 19 September 2025 11:04:46 +0000 (0:00:00.221) 0:00:03.048 ****** 2025-09-19 11:05:25.246631 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:05:25.246642 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:05:25.246653 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:05:25.246663 | orchestrator | 2025-09-19 11:05:25.246674 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-19 11:05:25.246685 | orchestrator | Friday 19 September 2025 11:04:47 +0000 (0:00:00.205) 0:00:03.253 ****** 2025-09-19 11:05:25.246697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:05:25.246709 | orchestrator | 2025-09-19 11:05:25.246720 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2025-09-19 11:05:25.246731 | orchestrator | Friday 19 September 2025 11:04:47 +0000 (0:00:00.138) 0:00:03.391 ****** 2025-09-19 11:05:25.246741 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:05:25.246752 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:05:25.246762 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:05:25.246773 | orchestrator | 2025-09-19 11:05:25.246784 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-19 11:05:25.246795 | orchestrator | Friday 19 September 2025 11:04:47 +0000 (0:00:00.410) 0:00:03.802 ****** 2025-09-19 11:05:25.246806 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:05:25.246816 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:05:25.246827 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:05:25.246838 | orchestrator | 2025-09-19 11:05:25.246849 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-19 11:05:25.246860 | orchestrator | Friday 19 September 2025 11:04:47 +0000 (0:00:00.112) 0:00:03.914 ****** 2025-09-19 11:05:25.246871 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:05:25.246881 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:05:25.246892 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:05:25.246903 | orchestrator | 2025-09-19 11:05:25.246914 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-19 11:05:25.246924 | orchestrator | Friday 19 September 2025 11:04:48 +0000 (0:00:00.987) 0:00:04.901 ****** 2025-09-19 11:05:25.246935 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:05:25.246946 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:05:25.246956 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:05:25.246967 | orchestrator | 2025-09-19 11:05:25.246978 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-19 
11:05:25.246989 | orchestrator | Friday 19 September 2025 11:04:49 +0000 (0:00:00.444) 0:00:05.345 ****** 2025-09-19 11:05:25.246999 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:05:25.247010 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:05:25.247020 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:05:25.247031 | orchestrator | 2025-09-19 11:05:25.247042 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-19 11:05:25.247070 | orchestrator | Friday 19 September 2025 11:04:50 +0000 (0:00:01.034) 0:00:06.380 ****** 2025-09-19 11:05:25.247082 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:05:25.247092 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:05:25.247103 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:05:25.247114 | orchestrator | 2025-09-19 11:05:25.247129 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-09-19 11:05:25.247140 | orchestrator | Friday 19 September 2025 11:05:07 +0000 (0:00:17.114) 0:00:23.494 ****** 2025-09-19 11:05:25.247151 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:05:25.247170 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:05:25.247181 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:05:25.247192 | orchestrator | 2025-09-19 11:05:25.247203 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-09-19 11:05:25.247234 | orchestrator | Friday 19 September 2025 11:05:07 +0000 (0:00:00.108) 0:00:23.603 ****** 2025-09-19 11:05:25.247246 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:05:25.247257 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:05:25.247268 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:05:25.247278 | orchestrator | 2025-09-19 11:05:25.247290 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-19 
11:05:25.247301 | orchestrator | Friday 19 September 2025 11:05:15 +0000 (0:00:08.451) 0:00:32.055 ****** 2025-09-19 11:05:25.247311 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:05:25.247322 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:05:25.247333 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:05:25.247344 | orchestrator | 2025-09-19 11:05:25.247355 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-19 11:05:25.247366 | orchestrator | Friday 19 September 2025 11:05:16 +0000 (0:00:00.433) 0:00:32.488 ****** 2025-09-19 11:05:25.247376 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-09-19 11:05:25.247387 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-09-19 11:05:25.247398 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-09-19 11:05:25.247409 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-09-19 11:05:25.247420 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-09-19 11:05:25.247431 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-09-19 11:05:25.247442 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-09-19 11:05:25.247452 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-09-19 11:05:25.247463 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-09-19 11:05:25.247474 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-09-19 11:05:25.247484 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-09-19 11:05:25.247495 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-09-19 11:05:25.247506 | orchestrator | 2025-09-19 11:05:25.247517 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2025-09-19 11:05:25.247528 | orchestrator | Friday 19 September 2025 11:05:20 +0000 (0:00:03.693) 0:00:36.181 ****** 2025-09-19 11:05:25.247538 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:05:25.247549 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:05:25.247560 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:05:25.247571 | orchestrator | 2025-09-19 11:05:25.247582 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 11:05:25.247593 | orchestrator | 2025-09-19 11:05:25.247603 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 11:05:25.247614 | orchestrator | Friday 19 September 2025 11:05:21 +0000 (0:00:01.224) 0:00:37.406 ****** 2025-09-19 11:05:25.247625 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:05:25.247636 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:05:25.247646 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:05:25.247657 | orchestrator | ok: [testbed-manager] 2025-09-19 11:05:25.247668 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:05:25.247679 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:05:25.247689 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:05:25.247700 | orchestrator | 2025-09-19 11:05:25.247711 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:05:25.247722 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:05:25.247734 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:05:25.247753 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:05:25.247764 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:05:25.247775 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:05:25.247786 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:05:25.247797 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:05:25.247808 | orchestrator | 2025-09-19 11:05:25.247819 | orchestrator | 2025-09-19 11:05:25.247830 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:05:25.247841 | orchestrator | Friday 19 September 2025 11:05:25 +0000 (0:00:03.954) 0:00:41.360 ****** 2025-09-19 11:05:25.247851 | orchestrator | =============================================================================== 2025-09-19 11:05:25.247862 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.11s 2025-09-19 11:05:25.247873 | orchestrator | Install required packages (Debian) -------------------------------------- 8.45s 2025-09-19 11:05:25.247884 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.95s 2025-09-19 11:05:25.247894 | orchestrator | Copy fact files --------------------------------------------------------- 3.69s 2025-09-19 11:05:25.247905 | orchestrator | Create custom facts directory ------------------------------------------- 1.39s 2025-09-19 11:05:25.247916 | orchestrator | Copy fact file ---------------------------------------------------------- 1.23s 2025-09-19 11:05:25.247932 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.22s 2025-09-19 11:05:25.470292 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s 2025-09-19 11:05:25.470387 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.99s 2025-09-19 11:05:25.470401 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.44s 2025-09-19 11:05:25.470413 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s 2025-09-19 11:05:25.470424 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s 2025-09-19 11:05:25.470435 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s 2025-09-19 11:05:25.470446 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s 2025-09-19 11:05:25.470457 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s 2025-09-19 11:05:25.470468 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-09-19 11:05:25.470479 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-09-19 11:05:25.470490 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-09-19 11:05:25.758307 | orchestrator | + osism apply bootstrap 2025-09-19 11:05:37.663964 | orchestrator | 2025-09-19 11:05:37 | INFO  | Task 2e4c31d1-5046-4a9c-b5fa-3abf7078c93a (bootstrap) was prepared for execution. 2025-09-19 11:05:37.664154 | orchestrator | 2025-09-19 11:05:37 | INFO  | It takes a moment until task 2e4c31d1-5046-4a9c-b5fa-3abf7078c93a (bootstrap) has been started and output is visible here. 
2025-09-19 11:05:53.823181 | orchestrator | 2025-09-19 11:05:53.823278 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-19 11:05:53.823290 | orchestrator | 2025-09-19 11:05:53.823299 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-19 11:05:53.823326 | orchestrator | Friday 19 September 2025 11:05:41 +0000 (0:00:00.178) 0:00:00.179 ****** 2025-09-19 11:05:53.823335 | orchestrator | ok: [testbed-manager] 2025-09-19 11:05:53.823344 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:05:53.823352 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:05:53.823360 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:05:53.823369 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:05:53.823377 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:05:53.823384 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:05:53.823392 | orchestrator | 2025-09-19 11:05:53.823440 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 11:05:53.823449 | orchestrator | 2025-09-19 11:05:53.823457 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 11:05:53.823465 | orchestrator | Friday 19 September 2025 11:05:42 +0000 (0:00:00.246) 0:00:00.425 ****** 2025-09-19 11:05:53.823473 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:05:53.823481 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:05:53.823489 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:05:53.823497 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:05:53.823505 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:05:53.823513 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:05:53.823520 | orchestrator | ok: [testbed-manager] 2025-09-19 11:05:53.823528 | orchestrator | 2025-09-19 11:05:53.823536 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-09-19 11:05:53.823544 | orchestrator | 2025-09-19 11:05:53.823552 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 11:05:53.823560 | orchestrator | Friday 19 September 2025 11:05:45 +0000 (0:00:03.798) 0:00:04.224 ****** 2025-09-19 11:05:53.823568 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-19 11:05:53.823577 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-19 11:05:53.823584 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-09-19 11:05:53.823592 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-19 11:05:53.823600 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-19 11:05:53.823607 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-19 11:05:53.823615 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-09-19 11:05:53.823623 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-19 11:05:53.823631 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-19 11:05:53.823638 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-19 11:05:53.823646 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-19 11:05:53.823654 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-19 11:05:53.823661 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-19 11:05:53.823669 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-09-19 11:05:53.823677 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-19 11:05:53.823685 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-19 11:05:53.823692 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-19 11:05:53.823700 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-0)  2025-09-19 11:05:53.823708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-09-19 11:05:53.823716 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:05:53.823727 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-09-19 11:05:53.823738 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-09-19 11:05:53.823747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-19 11:05:53.823756 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-19 11:05:53.823765 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-09-19 11:05:53.823781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-19 11:05:53.823790 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-19 11:05:53.823799 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-09-19 11:05:53.823808 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:05:53.823817 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-19 11:05:53.823826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-19 11:05:53.823835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-19 11:05:53.823845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:05:53.823853 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:05:53.823860 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-19 11:05:53.823868 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-19 11:05:53.823876 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-09-19 11:05:53.823884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:05:53.823891 | orchestrator | skipping: [testbed-node-2] => 
(item=testbed-node-3)  2025-09-19 11:05:53.823899 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-19 11:05:53.823907 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-19 11:05:53.823914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:05:53.823922 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:05:53.823930 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-09-19 11:05:53.823937 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-19 11:05:53.823999 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-09-19 11:05:53.824052 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-09-19 11:05:53.824062 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-19 11:05:53.824069 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-09-19 11:05:53.824077 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:05:53.824085 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-09-19 11:05:53.824093 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-09-19 11:05:53.824101 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:05:53.824108 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-09-19 11:05:53.824116 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-09-19 11:05:53.824124 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:05:53.824132 | orchestrator | 2025-09-19 11:05:53.824140 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-09-19 11:05:53.824148 | orchestrator | 2025-09-19 11:05:53.824155 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-09-19 11:05:53.824164 | orchestrator | Friday 19 September 2025 11:05:46 +0000 
(0:00:00.476) 0:00:04.701 ****** 2025-09-19 11:05:53.824171 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:05:53.824179 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:05:53.824187 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:05:53.824195 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:05:53.824203 | orchestrator | ok: [testbed-manager] 2025-09-19 11:05:53.824211 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:05:53.824218 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:05:53.824226 | orchestrator | 2025-09-19 11:05:53.824234 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-09-19 11:05:53.824242 | orchestrator | Friday 19 September 2025 11:05:47 +0000 (0:00:01.327) 0:00:06.028 ****** 2025-09-19 11:05:53.824250 | orchestrator | ok: [testbed-manager] 2025-09-19 11:05:53.824258 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:05:53.824266 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:05:53.824273 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:05:53.824281 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:05:53.824295 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:05:53.824303 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:05:53.824311 | orchestrator | 2025-09-19 11:05:53.824319 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-09-19 11:05:53.824327 | orchestrator | Friday 19 September 2025 11:05:48 +0000 (0:00:01.232) 0:00:07.261 ****** 2025-09-19 11:05:53.824336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:05:53.824346 | orchestrator | 2025-09-19 11:05:53.824354 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-09-19 11:05:53.824362 | orchestrator 
| Friday 19 September 2025 11:05:49 +0000 (0:00:00.289) 0:00:07.550 ****** 2025-09-19 11:05:53.824370 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:05:53.824378 | orchestrator | changed: [testbed-manager] 2025-09-19 11:05:53.824386 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:05:53.824394 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:05:53.824401 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:05:53.824409 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:05:53.824417 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:05:53.824425 | orchestrator | 2025-09-19 11:05:53.824432 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-19 11:05:53.824440 | orchestrator | Friday 19 September 2025 11:05:51 +0000 (0:00:02.034) 0:00:09.585 ****** 2025-09-19 11:05:53.824448 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:05:53.824461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:05:53.824472 | orchestrator | 2025-09-19 11:05:53.824480 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-19 11:05:53.824488 | orchestrator | Friday 19 September 2025 11:05:51 +0000 (0:00:00.291) 0:00:09.876 ****** 2025-09-19 11:05:53.824495 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:05:53.824503 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:05:53.824511 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:05:53.824519 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:05:53.824526 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:05:53.824534 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:05:53.824542 | orchestrator | 2025-09-19 11:05:53.824550 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2025-09-19 11:05:53.824558 | orchestrator | Friday 19 September 2025 11:05:52 +0000 (0:00:01.024) 0:00:10.901 ****** 2025-09-19 11:05:53.824565 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:05:53.824573 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:05:53.824581 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:05:53.824589 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:05:53.824596 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:05:53.824604 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:05:53.824612 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:05:53.824620 | orchestrator | 2025-09-19 11:05:53.824627 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-19 11:05:53.824635 | orchestrator | Friday 19 September 2025 11:05:53 +0000 (0:00:00.659) 0:00:11.562 ****** 2025-09-19 11:05:53.824643 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:05:53.824651 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:05:53.824659 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:05:53.824666 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:05:53.824674 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:05:53.824682 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:05:53.824690 | orchestrator | ok: [testbed-manager] 2025-09-19 11:05:53.824698 | orchestrator | 2025-09-19 11:05:53.824706 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-19 11:05:53.824719 | orchestrator | Friday 19 September 2025 11:05:53 +0000 (0:00:00.448) 0:00:12.011 ****** 2025-09-19 11:05:53.824727 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:05:53.824735 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:05:53.824747 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:06:06.348402 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 11:06:06.348513 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:06:06.348529 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:06:06.348541 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:06:06.348553 | orchestrator | 2025-09-19 11:06:06.348567 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-19 11:06:06.348580 | orchestrator | Friday 19 September 2025 11:05:53 +0000 (0:00:00.244) 0:00:12.255 ****** 2025-09-19 11:06:06.348593 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:06:06.348620 | orchestrator | 2025-09-19 11:06:06.348632 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-19 11:06:06.348644 | orchestrator | Friday 19 September 2025 11:05:54 +0000 (0:00:00.333) 0:00:12.589 ****** 2025-09-19 11:06:06.348655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:06:06.348667 | orchestrator | 2025-09-19 11:06:06.348678 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-19 11:06:06.348689 | orchestrator | Friday 19 September 2025 11:05:54 +0000 (0:00:00.324) 0:00:12.913 ****** 2025-09-19 11:06:06.348700 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:06:06.348712 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:06:06.348723 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:06:06.348734 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:06:06.348745 | orchestrator | ok: [testbed-manager] 2025-09-19 
11:06:06.348756 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:06:06.348767 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:06:06.348778 | orchestrator | 2025-09-19 11:06:06.348789 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-19 11:06:06.348800 | orchestrator | Friday 19 September 2025 11:05:55 +0000 (0:00:01.206) 0:00:14.120 ****** 2025-09-19 11:06:06.348810 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:06:06.348821 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:06:06.348832 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:06:06.348843 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:06:06.348854 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:06:06.348864 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:06:06.348875 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:06:06.348886 | orchestrator | 2025-09-19 11:06:06.348897 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-19 11:06:06.348908 | orchestrator | Friday 19 September 2025 11:05:55 +0000 (0:00:00.227) 0:00:14.348 ****** 2025-09-19 11:06:06.348919 | orchestrator | ok: [testbed-manager] 2025-09-19 11:06:06.348930 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:06:06.348941 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:06:06.348952 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:06:06.348962 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:06:06.348973 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:06:06.348984 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:06:06.348994 | orchestrator | 2025-09-19 11:06:06.349028 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-19 11:06:06.349039 | orchestrator | Friday 19 September 2025 11:05:56 +0000 (0:00:00.580) 0:00:14.928 ****** 2025-09-19 11:06:06.349049 | orchestrator | skipping: 
[testbed-manager] 2025-09-19 11:06:06.349084 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:06:06.349097 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:06:06.349108 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:06:06.349118 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:06:06.349129 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:06:06.349139 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:06:06.349150 | orchestrator | 2025-09-19 11:06:06.349161 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-19 11:06:06.349173 | orchestrator | Friday 19 September 2025 11:05:56 +0000 (0:00:00.315) 0:00:15.244 ****** 2025-09-19 11:06:06.349184 | orchestrator | ok: [testbed-manager] 2025-09-19 11:06:06.349195 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:06:06.349217 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:06:06.349228 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:06:06.349239 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:06:06.349250 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:06:06.349260 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:06:06.349271 | orchestrator | 2025-09-19 11:06:06.349282 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-19 11:06:06.349293 | orchestrator | Friday 19 September 2025 11:05:57 +0000 (0:00:00.575) 0:00:15.819 ****** 2025-09-19 11:06:06.349303 | orchestrator | ok: [testbed-manager] 2025-09-19 11:06:06.349314 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:06:06.349324 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:06:06.349335 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:06:06.349346 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:06:06.349356 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:06:06.349367 | orchestrator | changed: 
[testbed-node-5] 2025-09-19 11:06:06.349378 | orchestrator | 2025-09-19 11:06:06.349388 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-19 11:06:06.349399 | orchestrator | Friday 19 September 2025 11:05:58 +0000 (0:00:01.185) 0:00:17.004 ****** 2025-09-19 11:06:06.349410 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:06:06.349420 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:06:06.349431 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:06:06.349442 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:06:06.349452 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:06:06.349463 | orchestrator | ok: [testbed-manager] 2025-09-19 11:06:06.349474 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:06:06.349484 | orchestrator | 2025-09-19 11:06:06.349495 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-19 11:06:06.349506 | orchestrator | Friday 19 September 2025 11:05:59 +0000 (0:00:01.236) 0:00:18.241 ****** 2025-09-19 11:06:06.349564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:06:06.349578 | orchestrator | 2025-09-19 11:06:06.349589 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-19 11:06:06.349600 | orchestrator | Friday 19 September 2025 11:06:00 +0000 (0:00:00.445) 0:00:18.686 ****** 2025-09-19 11:06:06.349611 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:06:06.349622 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:06:06.349633 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:06:06.349643 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:06:06.349654 | orchestrator | changed: [testbed-node-4] 2025-09-19 
11:06:06.349665 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:06:06.349675 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:06:06.349686 | orchestrator | 2025-09-19 11:06:06.349697 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-19 11:06:06.349708 | orchestrator | Friday 19 September 2025 11:06:01 +0000 (0:00:01.288) 0:00:19.975 ****** 2025-09-19 11:06:06.349718 | orchestrator | ok: [testbed-manager] 2025-09-19 11:06:06.349741 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:06:06.349752 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:06:06.349763 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:06:06.349774 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:06:06.349785 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:06:06.349795 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:06:06.349806 | orchestrator | 2025-09-19 11:06:06.349817 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-19 11:06:06.349828 | orchestrator | Friday 19 September 2025 11:06:01 +0000 (0:00:00.242) 0:00:20.217 ****** 2025-09-19 11:06:06.349838 | orchestrator | ok: [testbed-manager] 2025-09-19 11:06:06.349849 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:06:06.349860 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:06:06.349871 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:06:06.349881 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:06:06.349892 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:06:06.349903 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:06:06.349913 | orchestrator | 2025-09-19 11:06:06.349925 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-19 11:06:06.349936 | orchestrator | Friday 19 September 2025 11:06:02 +0000 (0:00:00.238) 0:00:20.455 ****** 2025-09-19 11:06:06.349946 | orchestrator | ok: [testbed-manager] 2025-09-19 11:06:06.349957 | 
orchestrator | ok: [testbed-node-0] 2025-09-19 11:06:06.349968 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:06:06.349978 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:06:06.349989 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:06:06.350066 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:06:06.350081 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:06:06.350092 | orchestrator | 2025-09-19 11:06:06.350103 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-19 11:06:06.350114 | orchestrator | Friday 19 September 2025 11:06:02 +0000 (0:00:00.245) 0:00:20.701 ****** 2025-09-19 11:06:06.350126 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:06:06.350139 | orchestrator | 2025-09-19 11:06:06.350150 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-19 11:06:06.350161 | orchestrator | Friday 19 September 2025 11:06:02 +0000 (0:00:00.295) 0:00:20.996 ****** 2025-09-19 11:06:06.350172 | orchestrator | ok: [testbed-manager] 2025-09-19 11:06:06.350183 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:06:06.350193 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:06:06.350210 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:06:06.350221 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:06:06.350231 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:06:06.350242 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:06:06.350253 | orchestrator | 2025-09-19 11:06:06.350263 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-19 11:06:06.350274 | orchestrator | Friday 19 September 2025 11:06:03 +0000 (0:00:00.532) 0:00:21.528 ****** 2025-09-19 11:06:06.350285 | orchestrator | 
skipping: [testbed-manager] 2025-09-19 11:06:06.350296 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:06:06.350307 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:06:06.350318 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:06:06.350328 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:06:06.350339 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:06:06.350350 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:06:06.350361 | orchestrator | 2025-09-19 11:06:06.350372 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-19 11:06:06.350383 | orchestrator | Friday 19 September 2025 11:06:03 +0000 (0:00:00.274) 0:00:21.803 ****** 2025-09-19 11:06:06.350394 | orchestrator | ok: [testbed-manager] 2025-09-19 11:06:06.350405 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:06:06.350415 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:06:06.350435 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:06:06.350446 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:06:06.350457 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:06:06.350468 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:06:06.350479 | orchestrator | 2025-09-19 11:06:06.350490 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-19 11:06:06.350501 | orchestrator | Friday 19 September 2025 11:06:04 +0000 (0:00:01.106) 0:00:22.910 ****** 2025-09-19 11:06:06.350511 | orchestrator | ok: [testbed-manager] 2025-09-19 11:06:06.350522 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:06:06.350533 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:06:06.350544 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:06:06.350554 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:06:06.350565 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:06:06.350576 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:06:06.350587 | orchestrator | 
2025-09-19 11:06:06.350598 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-19 11:06:06.350608 | orchestrator | Friday 19 September 2025 11:06:05 +0000 (0:00:00.564) 0:00:23.475 ****** 2025-09-19 11:06:06.350619 | orchestrator | ok: [testbed-manager] 2025-09-19 11:06:06.350630 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:06:06.350641 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:06:06.350659 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:06:48.741078 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:06:48.741181 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:06:48.741193 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:06:48.741204 | orchestrator | 2025-09-19 11:06:48.741215 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-19 11:06:48.741227 | orchestrator | Friday 19 September 2025 11:06:06 +0000 (0:00:01.212) 0:00:24.687 ****** 2025-09-19 11:06:48.741237 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:06:48.741247 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:06:48.741257 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:06:48.741267 | orchestrator | changed: [testbed-manager] 2025-09-19 11:06:48.741278 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:06:48.741288 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:06:48.741297 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:06:48.741307 | orchestrator | 2025-09-19 11:06:48.741318 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-19 11:06:48.741327 | orchestrator | Friday 19 September 2025 11:06:24 +0000 (0:00:18.020) 0:00:42.707 ****** 2025-09-19 11:06:48.741337 | orchestrator | ok: [testbed-manager] 2025-09-19 11:06:48.741347 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:06:48.741357 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:06:48.741367 | orchestrator 
| ok: [testbed-node-2]
2025-09-19 11:06:48.741376 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:06:48.741386 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:06:48.741396 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:06:48.741405 | orchestrator |
2025-09-19 11:06:48.741415 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-09-19 11:06:48.741425 | orchestrator | Friday 19 September 2025 11:06:24 +0000 (0:00:00.264) 0:00:42.972 ******
2025-09-19 11:06:48.741435 | orchestrator | ok: [testbed-manager]
2025-09-19 11:06:48.741444 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:06:48.741454 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:06:48.741463 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:06:48.741473 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:06:48.741483 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:06:48.741492 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:06:48.741502 | orchestrator |
2025-09-19 11:06:48.741512 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-09-19 11:06:48.741522 | orchestrator | Friday 19 September 2025 11:06:24 +0000 (0:00:00.276) 0:00:43.248 ******
2025-09-19 11:06:48.741531 | orchestrator | ok: [testbed-manager]
2025-09-19 11:06:48.741541 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:06:48.741551 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:06:48.741584 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:06:48.741596 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:06:48.741607 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:06:48.741618 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:06:48.741629 | orchestrator |
2025-09-19 11:06:48.741640 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-09-19 11:06:48.741651 | orchestrator | Friday 19 September 2025 11:06:25 +0000 (0:00:00.233) 0:00:43.482 ******
2025-09-19 11:06:48.741663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:06:48.741675 | orchestrator |
2025-09-19 11:06:48.741686 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-09-19 11:06:48.741698 | orchestrator | Friday 19 September 2025 11:06:25 +0000 (0:00:00.304) 0:00:43.786 ******
2025-09-19 11:06:48.741708 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:06:48.741719 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:06:48.741730 | orchestrator | ok: [testbed-manager]
2025-09-19 11:06:48.741741 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:06:48.741752 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:06:48.741763 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:06:48.741788 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:06:48.741799 | orchestrator |
2025-09-19 11:06:48.741811 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-09-19 11:06:48.741823 | orchestrator | Friday 19 September 2025 11:06:27 +0000 (0:00:01.690) 0:00:45.477 ******
2025-09-19 11:06:48.741834 | orchestrator | changed: [testbed-manager]
2025-09-19 11:06:48.741845 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:06:48.741855 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:06:48.741866 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:06:48.741877 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:06:48.741888 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:06:48.741899 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:06:48.741910 | orchestrator |
2025-09-19 11:06:48.741921 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-09-19 11:06:48.741933 | orchestrator | Friday 19 September 2025 11:06:28 +0000 (0:00:01.156) 0:00:46.633 ******
2025-09-19 11:06:48.741943 | orchestrator | ok: [testbed-manager]
2025-09-19 11:06:48.741979 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:06:48.741989 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:06:48.741999 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:06:48.742008 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:06:48.742085 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:06:48.742096 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:06:48.742106 | orchestrator |
2025-09-19 11:06:48.742115 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-09-19 11:06:48.742125 | orchestrator | Friday 19 September 2025 11:06:29 +0000 (0:00:00.890) 0:00:47.524 ******
2025-09-19 11:06:48.742136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:06:48.742148 | orchestrator |
2025-09-19 11:06:48.742157 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-09-19 11:06:48.742168 | orchestrator | Friday 19 September 2025 11:06:29 +0000 (0:00:00.298) 0:00:47.823 ******
2025-09-19 11:06:48.742189 | orchestrator | changed: [testbed-manager]
2025-09-19 11:06:48.742254 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:06:48.742266 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:06:48.742277 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:06:48.742286 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:06:48.742296 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:06:48.742322 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:06:48.742332 | orchestrator |
2025-09-19 11:06:48.742353 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-09-19 11:06:48.742363 | orchestrator | Friday 19 September 2025 11:06:30 +0000 (0:00:01.088) 0:00:48.911 ******
2025-09-19 11:06:48.742373 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:06:48.742383 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:06:48.742393 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:06:48.742403 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:06:48.742412 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:06:48.742422 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:06:48.742432 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:06:48.742441 | orchestrator |
2025-09-19 11:06:48.742451 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-09-19 11:06:48.742461 | orchestrator | Friday 19 September 2025 11:06:30 +0000 (0:00:00.307) 0:00:49.219 ******
2025-09-19 11:06:48.742471 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:06:48.742480 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:06:48.742490 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:06:48.742500 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:06:48.742509 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:06:48.742519 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:06:48.742535 | orchestrator | changed: [testbed-manager]
2025-09-19 11:06:48.742551 | orchestrator |
2025-09-19 11:06:48.742567 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-09-19 11:06:48.742582 | orchestrator | Friday 19 September 2025 11:06:43 +0000 (0:00:12.801) 0:01:02.021 ******
2025-09-19 11:06:48.742596 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:06:48.742610 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:06:48.742624 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:06:48.742638 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:06:48.742651 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:06:48.742665 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:06:48.742679 | orchestrator | ok: [testbed-manager]
2025-09-19 11:06:48.742692 | orchestrator |
2025-09-19 11:06:48.742706 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-09-19 11:06:48.742720 | orchestrator | Friday 19 September 2025 11:06:44 +0000 (0:00:00.985) 0:01:03.006 ******
2025-09-19 11:06:48.743069 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:06:48.743085 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:06:48.743095 | orchestrator | ok: [testbed-manager]
2025-09-19 11:06:48.743104 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:06:48.743113 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:06:48.743123 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:06:48.743132 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:06:48.743142 | orchestrator |
2025-09-19 11:06:48.743151 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-09-19 11:06:48.743161 | orchestrator | Friday 19 September 2025 11:06:45 +0000 (0:00:00.869) 0:01:03.876 ******
2025-09-19 11:06:48.743171 | orchestrator | ok: [testbed-manager]
2025-09-19 11:06:48.743180 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:06:48.743189 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:06:48.743199 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:06:48.743208 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:06:48.743218 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:06:48.743227 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:06:48.743237 | orchestrator |
2025-09-19 11:06:48.743247 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-09-19 11:06:48.743257 | orchestrator | Friday 19 September 2025 11:06:45 +0000 (0:00:00.247) 0:01:04.124 ******
2025-09-19 11:06:48.743266 | orchestrator | ok: [testbed-manager]
2025-09-19 11:06:48.743276 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:06:48.743285 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:06:48.743295 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:06:48.743304 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:06:48.743313 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:06:48.743323 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:06:48.743344 | orchestrator |
2025-09-19 11:06:48.743365 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-09-19 11:06:48.743376 | orchestrator | Friday 19 September 2025 11:06:46 +0000 (0:00:00.271) 0:01:04.395 ******
2025-09-19 11:06:48.743386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:06:48.743397 | orchestrator |
2025-09-19 11:06:48.743407 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-09-19 11:06:48.743417 | orchestrator | Friday 19 September 2025 11:06:46 +0000 (0:00:00.317) 0:01:04.713 ******
2025-09-19 11:06:48.743426 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:06:48.743436 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:06:48.743445 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:06:48.743454 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:06:48.743464 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:06:48.743473 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:06:48.743483 | orchestrator | ok: [testbed-manager]
2025-09-19 11:06:48.743492 | orchestrator |
2025-09-19 11:06:48.743502 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-09-19 11:06:48.743511 | orchestrator | Friday 19 September 2025 11:06:47 +0000 (0:00:01.552) 0:01:06.266 ******
2025-09-19 11:06:48.743521 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:06:48.743531 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:06:48.743541 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:06:48.743550 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:06:48.743560 | orchestrator | changed: [testbed-manager]
2025-09-19 11:06:48.743569 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:06:48.743579 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:06:48.743588 | orchestrator |
2025-09-19 11:06:48.743598 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-09-19 11:06:48.743608 | orchestrator | Friday 19 September 2025 11:06:48 +0000 (0:00:00.576) 0:01:06.842 ******
2025-09-19 11:06:48.743618 | orchestrator | ok: [testbed-manager]
2025-09-19 11:06:48.743627 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:06:48.743637 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:06:48.743646 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:06:48.743656 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:06:48.743665 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:06:48.743675 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:06:48.743684 | orchestrator |
2025-09-19 11:06:48.743706 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-09-19 11:09:15.712803 | orchestrator | Friday 19 September 2025 11:06:48 +0000 (0:00:00.241) 0:01:07.084 ******
2025-09-19 11:09:15.712909 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:09:15.712933 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:09:15.712960 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:09:15.712981 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:09:15.712998 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:09:15.713018 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:09:15.713037 | orchestrator | ok: [testbed-manager]
2025-09-19 11:09:15.713056 | orchestrator |
2025-09-19 11:09:15.713069 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-09-19 11:09:15.713081 | orchestrator | Friday 19 September 2025 11:06:49 +0000 (0:00:01.173) 0:01:08.257 ******
2025-09-19 11:09:15.713092 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:09:15.713103 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:09:15.713114 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:09:15.713125 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:09:15.713136 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:09:15.713146 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:09:15.713157 | orchestrator | ok: [testbed-manager]
2025-09-19 11:09:15.713168 | orchestrator |
2025-09-19 11:09:15.713179 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-09-19 11:09:15.713217 | orchestrator | Friday 19 September 2025 11:06:51 +0000 (0:00:01.927) 0:01:10.185 ******
2025-09-19 11:09:15.713228 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:09:15.713239 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:09:15.713250 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:09:15.713260 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:09:15.713271 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:09:15.713281 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:09:15.713292 | orchestrator | changed: [testbed-manager]
2025-09-19 11:09:15.713303 | orchestrator |
2025-09-19 11:09:15.713315 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-09-19 11:09:15.713328 | orchestrator | Friday 19 September 2025 11:06:58 +0000 (0:00:06.761) 0:01:16.946 ******
2025-09-19 11:09:15.713340 | orchestrator | ok: [testbed-manager]
2025-09-19 11:09:15.713353 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:09:15.713366 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:09:15.713378 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:09:15.713390 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:09:15.713403 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:09:15.713415 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:09:15.713427 | orchestrator |
2025-09-19 11:09:15.713440 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-09-19 11:09:15.713453 | orchestrator | Friday 19 September 2025 11:07:37 +0000 (0:00:38.932) 0:01:55.879 ******
2025-09-19 11:09:15.713466 | orchestrator | changed: [testbed-manager]
2025-09-19 11:09:15.713478 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:09:15.713491 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:09:15.713503 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:09:15.713515 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:09:15.713528 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:09:15.713541 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:09:15.713553 | orchestrator |
2025-09-19 11:09:15.713566 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-09-19 11:09:15.713579 | orchestrator | Friday 19 September 2025 11:08:54 +0000 (0:01:17.471) 0:03:13.351 ******
2025-09-19 11:09:15.713592 | orchestrator | ok: [testbed-manager]
2025-09-19 11:09:15.713604 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:09:15.713617 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:09:15.713630 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:09:15.713643 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:09:15.713656 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:09:15.713668 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:09:15.713679 | orchestrator |
2025-09-19 11:09:15.713690 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-09-19 11:09:15.713717 | orchestrator | Friday 19 September 2025 11:08:56 +0000 (0:00:01.670) 0:03:15.021 ******
2025-09-19 11:09:15.713755 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:09:15.713767 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:09:15.713777 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:09:15.713788 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:09:15.713798 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:09:15.713809 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:09:15.713820 | orchestrator | changed: [testbed-manager]
2025-09-19 11:09:15.713831 | orchestrator |
2025-09-19 11:09:15.713842 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-09-19 11:09:15.713853 | orchestrator | Friday 19 September 2025 11:09:08 +0000 (0:00:12.322) 0:03:27.344 ******
2025-09-19 11:09:15.713873 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-09-19 11:09:15.713896 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-09-19 11:09:15.713944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-09-19 11:09:15.713958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-09-19 11:09:15.713970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-09-19 11:09:15.713981 | orchestrator |
2025-09-19 11:09:15.713993 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-09-19 11:09:15.714004 | orchestrator | Friday 19 September 2025 11:09:09 +0000 (0:00:00.411) 0:03:27.755 ******
2025-09-19 11:09:15.714015 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:09:15.714079 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:09:15.714090 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:09:15.714101 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:09:15.714112 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:09:15.714123 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:09:15.714133 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:09:15.714144 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:09:15.714155 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:09:15.714166 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:09:15.714177 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 11:09:15.714188 | orchestrator |
2025-09-19 11:09:15.714199 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-09-19 11:09:15.714210 | orchestrator | Friday 19 September 2025 11:09:11 +0000 (0:00:01.607) 0:03:29.362 ******
2025-09-19 11:09:15.714221 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:09:15.714233 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:09:15.714244 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:09:15.714255 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:09:15.714271 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:09:15.714282 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:09:15.714300 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:09:15.714311 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:09:15.714325 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:09:15.714345 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:09:15.714363 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:09:15.714383 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:09:15.714401 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:09:15.714419 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:09:15.714438 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:09:15.714457 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:09:15.714471 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:09:15.714482 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:09:15.714493 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:09:15.714503 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:09:15.714515 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:09:15.714533 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:09:18.907167 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:09:18.907242 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:09:18.907249 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:09:18.907254 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:09:18.907259 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:09:18.907263 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:09:18.907267 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:09:18.907271 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:09:18.907275 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:09:18.907279 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:09:18.907283 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:09:18.907287 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:09:18.907290 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:09:18.907294 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:09:18.907298 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:09:18.907302 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:09:18.907305 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:09:18.907324 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:09:18.907329 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:09:18.907333 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:09:18.907336 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:09:18.907340 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:09:18.907344 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:09:18.907348 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:09:18.907351 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 11:09:18.907355 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:09:18.907359 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:09:18.907363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:09:18.907366 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:09:18.907370 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:09:18.907374 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:09:18.907377 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:09:18.907381 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 11:09:18.907385 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:09:18.907388 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:09:18.907392 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 11:09:18.907396 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:09:18.907412 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:09:18.907416 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 11:09:18.907420 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:09:18.907423 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:09:18.907428 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 11:09:18.907431 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:09:18.907444 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:09:18.907448 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 11:09:18.907452 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:09:18.907455 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:09:18.907459 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 11:09:18.907463 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:09:18.907467 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 11:09:18.907475 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 11:09:18.907479 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 11:09:18.907482 | orchestrator |
2025-09-19 11:09:18.907487 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-09-19 11:09:18.907491 | orchestrator | Friday 19 September 2025 11:09:15 +0000 (0:00:04.692) 0:03:34.055 ******
2025-09-19 11:09:18.907495 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:09:18.907498 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:09:18.907502 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:09:18.907506 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:09:18.907510 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:09:18.907513 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:09:18.907520 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 11:09:18.907524 | orchestrator |
2025-09-19 11:09:18.907528 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-09-19 11:09:18.907532 | orchestrator | Friday 19 September 2025 11:09:17 +0000 (0:00:01.588) 0:03:35.643 ******
2025-09-19 11:09:18.907536 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:09:18.907539 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:09:18.907543 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:09:18.907547 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:09:18.907551 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:09:18.907555 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:09:18.907558 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:09:18.907562 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:09:18.907569 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:09:18.907572 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:09:18.907576 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 11:09:18.907580 | orchestrator |
2025-09-19 11:09:18.907584 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-09-19 11:09:18.907588 | orchestrator | Friday 19 September 2025 11:09:17 +0000 (0:00:00.595) 0:03:36.239 ******
2025-09-19 11:09:18.907591 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:09:18.907595 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:09:18.907599 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:09:18.907603 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:09:18.907606 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:09:18.907610 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:09:18.907614 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:09:18.907618 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:09:18.907621 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:09:18.907625 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:09:18.907634 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 11:09:18.907638 | orchestrator |
2025-09-19 11:09:18.907641 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-09-19 11:09:18.907645 | orchestrator | Friday 19 September 2025 11:09:18 +0000 (0:00:00.332) 0:03:36.922 ******
2025-09-19 11:09:18.907649 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:09:18.907653 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:09:18.907657 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:09:18.907660 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:09:18.907664 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:09:18.907670 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:09:31.070861 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:09:31.071004 | orchestrator |
2025-09-19 11:09:31.071033 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-09-19 11:09:31.071056 | orchestrator | Friday 19 September 2025 11:09:18 +0000 (0:00:00.332) 0:03:37.255 ******
2025-09-19 11:09:31.071074 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:09:31.071093 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:09:31.071111 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:09:31.071130 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:09:31.071150 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:09:31.071168 | orchestrator | ok: [testbed-manager]
2025-09-19 11:09:31.071184 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:09:31.071195 | orchestrator |
2025-09-19 11:09:31.071207 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-09-19 11:09:31.071218 | orchestrator | Friday 19 September 2025 11:09:24 +0000 (0:00:05.761) 0:03:43.016 ******
2025-09-19 11:09:31.071230 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-09-19 11:09:31.071241 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-09-19 11:09:31.071252 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:09:31.071263 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-09-19 11:09:31.071274 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:09:31.071285 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:09:31.071296 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-09-19 11:09:31.071310 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-09-19 11:09:31.071323 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:09:31.071336 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-09-19 11:09:31.071348 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:09:31.071361 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:09:31.071374 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-09-19 11:09:31.071387 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:09:31.071400 | orchestrator |
2025-09-19 11:09:31.071414 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-09-19 11:09:31.071427 | orchestrator | Friday 19 September 2025 11:09:24 +0000 (0:00:00.332) 0:03:43.349 ******
2025-09-19 11:09:31.071440 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-09-19 11:09:31.071453 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-09-19
11:09:31.071466 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-09-19 11:09:31.071479 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-09-19 11:09:31.071492 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-09-19 11:09:31.071504 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-09-19 11:09:31.071515 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-09-19 11:09:31.071525 | orchestrator | 2025-09-19 11:09:31.071536 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-09-19 11:09:31.071547 | orchestrator | Friday 19 September 2025 11:09:26 +0000 (0:00:01.055) 0:03:44.405 ****** 2025-09-19 11:09:31.071560 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:09:31.071600 | orchestrator | 2025-09-19 11:09:31.071612 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-09-19 11:09:31.071623 | orchestrator | Friday 19 September 2025 11:09:26 +0000 (0:00:00.449) 0:03:44.854 ****** 2025-09-19 11:09:31.071634 | orchestrator | ok: [testbed-manager] 2025-09-19 11:09:31.071645 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:09:31.071669 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:09:31.071681 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:09:31.071691 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:09:31.071739 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:09:31.071758 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:09:31.071776 | orchestrator | 2025-09-19 11:09:31.071795 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-09-19 11:09:31.071809 | orchestrator | Friday 19 September 2025 11:09:28 +0000 (0:00:01.638) 0:03:46.492 ****** 2025-09-19 
11:09:31.071820 | orchestrator | ok: [testbed-manager] 2025-09-19 11:09:31.071831 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:09:31.071842 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:09:31.071852 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:09:31.071863 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:09:31.071874 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:09:31.071884 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:09:31.071895 | orchestrator | 2025-09-19 11:09:31.071906 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-09-19 11:09:31.071917 | orchestrator | Friday 19 September 2025 11:09:28 +0000 (0:00:00.643) 0:03:47.136 ****** 2025-09-19 11:09:31.071927 | orchestrator | changed: [testbed-manager] 2025-09-19 11:09:31.071939 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:09:31.071950 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:09:31.071960 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:09:31.071971 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:09:31.071982 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:09:31.071993 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:09:31.072004 | orchestrator | 2025-09-19 11:09:31.072015 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-09-19 11:09:31.072026 | orchestrator | Friday 19 September 2025 11:09:29 +0000 (0:00:00.747) 0:03:47.883 ****** 2025-09-19 11:09:31.072036 | orchestrator | ok: [testbed-manager] 2025-09-19 11:09:31.072047 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:09:31.072058 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:09:31.072070 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:09:31.072080 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:09:31.072091 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:09:31.072102 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:09:31.072112 | 
orchestrator | 2025-09-19 11:09:31.072123 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-09-19 11:09:31.072134 | orchestrator | Friday 19 September 2025 11:09:30 +0000 (0:00:00.643) 0:03:48.526 ****** 2025-09-19 11:09:31.072170 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278824.2157679, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:09:31.072186 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278860.2720113, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:09:31.072229 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278851.8116791, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}) 2025-09-19 11:09:31.072253 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278869.8919845, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:09:31.072273 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278852.7569604, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:09:31.072291 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278868.1224692, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:09:31.072309 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758278850.3687658, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:09:31.072352 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:09:57.100317 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:09:57.100456 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:09:57.100473 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:09:57.100505 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:09:57.100518 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 
11:09:57.100530 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:09:57.100541 | orchestrator | 2025-09-19 11:09:57.100555 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-19 11:09:57.100568 | orchestrator | Friday 19 September 2025 11:09:31 +0000 (0:00:00.879) 0:03:49.406 ****** 2025-09-19 11:09:57.100580 | orchestrator | changed: [testbed-manager] 2025-09-19 11:09:57.100592 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:09:57.100602 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:09:57.100613 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:09:57.100624 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:09:57.100634 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:09:57.100645 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:09:57.100656 | orchestrator | 2025-09-19 11:09:57.100726 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-19 11:09:57.100738 | orchestrator | Friday 19 September 2025 11:09:32 +0000 (0:00:01.054) 0:03:50.460 ****** 2025-09-19 11:09:57.100758 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:09:57.100769 | orchestrator | changed: [testbed-manager] 2025-09-19 11:09:57.100779 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:09:57.100790 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:09:57.100818 | orchestrator | changed: [testbed-node-4] 
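The `changed:` items above print Ansible's stat-style file dictionaries for `/etc/pam.d/sshd` and `/etc/pam.d/login`, in which the octal `mode` ('0644') and the boolean flags (`rusr`, `wusr`, `xoth`, ...) encode the same information twice. A minimal sketch (a hypothetical helper, not part of the playbook) showing how those booleans derive from the mode:

```python
import stat

def perm_flags(mode_str):
    """Map an octal mode string such as '0644' to the rusr/wusr/... booleans
    that appear in the Ansible find/stat output above."""
    m = int(mode_str, 8)
    bits = {
        'rusr': stat.S_IRUSR, 'wusr': stat.S_IWUSR, 'xusr': stat.S_IXUSR,
        'rgrp': stat.S_IRGRP, 'wgrp': stat.S_IWGRP, 'xgrp': stat.S_IXGRP,
        'roth': stat.S_IROTH, 'woth': stat.S_IWOTH, 'xoth': stat.S_IXOTH,
    }
    return {name: bool(m & bit) for name, bit in bits.items()}

flags = perm_flags('0644')
# Matches the log entries: owner read/write, group and other read-only,
# no execute bits anywhere.
```

This is only a reading aid for the verbose item dictionaries; the playbook itself does not compute anything like this at run time.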
2025-09-19 11:09:57.100829 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:09:57.100840 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:09:57.100853 | orchestrator |
2025-09-19 11:09:57.100865 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-09-19 11:09:57.100878 | orchestrator | Friday 19 September 2025 11:09:33 +0000 (0:00:01.123) 0:03:51.584 ******
2025-09-19 11:09:57.100890 | orchestrator | changed: [testbed-manager]
2025-09-19 11:09:57.100902 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:09:57.100914 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:09:57.100927 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:09:57.100940 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:09:57.100952 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:09:57.100964 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:09:57.100976 | orchestrator |
2025-09-19 11:09:57.100988 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-09-19 11:09:57.101000 | orchestrator | Friday 19 September 2025 11:09:35 +0000 (0:00:01.972) 0:03:53.557 ******
2025-09-19 11:09:57.101012 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:09:57.101024 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:09:57.101036 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:09:57.101048 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:09:57.101061 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:09:57.101074 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:09:57.101085 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:09:57.101096 | orchestrator |
2025-09-19 11:09:57.101106 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-09-19 11:09:57.101117 | orchestrator | Friday 19 September 2025 11:09:35 +0000 (0:00:00.334) 0:03:53.891 ******
2025-09-19 11:09:57.101128 | orchestrator | ok: [testbed-manager]
2025-09-19 11:09:57.101140 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:09:57.101150 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:09:57.101161 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:09:57.101172 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:09:57.101183 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:09:57.101193 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:09:57.101204 | orchestrator |
2025-09-19 11:09:57.101214 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-09-19 11:09:57.101225 | orchestrator | Friday 19 September 2025 11:09:36 +0000 (0:00:00.717) 0:03:54.608 ******
2025-09-19 11:09:57.101238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:09:57.101250 | orchestrator |
2025-09-19 11:09:57.101261 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-09-19 11:09:57.101272 | orchestrator | Friday 19 September 2025 11:09:36 +0000 (0:00:00.391) 0:03:54.999 ******
2025-09-19 11:09:57.101283 | orchestrator | ok: [testbed-manager]
2025-09-19 11:09:57.101294 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:09:57.101305 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:09:57.101315 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:09:57.101326 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:09:57.101336 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:09:57.101348 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:09:57.101358 | orchestrator |
2025-09-19 11:09:57.101375 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-09-19 11:09:57.101386 | orchestrator | Friday 19 September 2025 11:09:44 +0000 (0:00:07.653) 0:04:02.653 ******
2025-09-19 11:09:57.101404 | orchestrator | ok: [testbed-manager]
2025-09-19 11:09:57.101415 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:09:57.101426 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:09:57.101436 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:09:57.101447 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:09:57.101457 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:09:57.101468 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:09:57.101478 | orchestrator |
2025-09-19 11:09:57.101489 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-09-19 11:09:57.101500 | orchestrator | Friday 19 September 2025 11:09:45 +0000 (0:00:01.254) 0:04:03.907 ******
2025-09-19 11:09:57.101510 | orchestrator | ok: [testbed-manager]
2025-09-19 11:09:57.101521 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:09:57.101531 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:09:57.101542 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:09:57.101552 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:09:57.101563 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:09:57.101573 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:09:57.101583 | orchestrator |
2025-09-19 11:09:57.101594 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-19 11:09:57.101605 | orchestrator | Friday 19 September 2025 11:09:46 +0000 (0:00:01.018) 0:04:04.926 ******
2025-09-19 11:09:57.101616 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:09:57.101627 | orchestrator |
2025-09-19 11:09:57.101644 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-09-19 11:09:57.101683 | orchestrator | Friday 19 September 2025 11:09:47 +0000 (0:00:00.501) 0:04:05.428 ******
2025-09-19 11:09:57.101703 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:09:57.101722 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:09:57.101740 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:09:57.101760 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:09:57.101775 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:09:57.101798 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:09:57.101823 | orchestrator | changed: [testbed-manager]
2025-09-19 11:09:57.101839 | orchestrator |
2025-09-19 11:09:57.101857 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-09-19 11:09:57.101874 | orchestrator | Friday 19 September 2025 11:09:56 +0000 (0:00:09.372) 0:04:14.800 ******
2025-09-19 11:09:57.101891 | orchestrator | changed: [testbed-manager]
2025-09-19 11:09:57.101907 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:09:57.101923 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:09:57.101953 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:11:07.636417 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:11:07.636522 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:11:07.636536 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:11:07.636548 | orchestrator |
2025-09-19 11:11:07.636591 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-09-19 11:11:07.636605 | orchestrator | Friday 19 September 2025 11:09:57 +0000 (0:00:00.642) 0:04:15.443 ******
2025-09-19 11:11:07.636616 | orchestrator | changed: [testbed-manager]
2025-09-19 11:11:07.636628 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:11:07.636639 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:11:07.636650 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:11:07.636661 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:11:07.636672 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:11:07.636683 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:11:07.636694 | orchestrator |
2025-09-19 11:11:07.636705 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-09-19 11:11:07.636716 | orchestrator | Friday 19 September 2025 11:09:58 +0000 (0:00:01.160) 0:04:16.604 ******
2025-09-19 11:11:07.636728 | orchestrator | changed: [testbed-manager]
2025-09-19 11:11:07.636739 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:11:07.636770 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:11:07.636781 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:11:07.636792 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:11:07.636803 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:11:07.636814 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:11:07.636824 | orchestrator |
2025-09-19 11:11:07.636835 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-09-19 11:11:07.636847 | orchestrator | Friday 19 September 2025 11:09:59 +0000 (0:00:01.055) 0:04:17.659 ******
2025-09-19 11:11:07.636858 | orchestrator | ok: [testbed-manager]
2025-09-19 11:11:07.636870 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:11:07.636881 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:11:07.636891 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:11:07.636902 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:11:07.636913 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:11:07.636924 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:11:07.636934 | orchestrator |
2025-09-19 11:11:07.636946 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-09-19 11:11:07.636958 | orchestrator | Friday 19 September 2025 11:09:59 +0000 (0:00:00.302) 0:04:17.962 ******
2025-09-19 11:11:07.636971 | orchestrator | ok: [testbed-manager]
2025-09-19 11:11:07.636983 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:11:07.636995 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:11:07.637007 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:11:07.637019 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:11:07.637031 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:11:07.637044 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:11:07.637057 | orchestrator |
2025-09-19 11:11:07.637069 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-09-19 11:11:07.637082 | orchestrator | Friday 19 September 2025 11:09:59 +0000 (0:00:00.340) 0:04:18.302 ******
2025-09-19 11:11:07.637095 | orchestrator | ok: [testbed-manager]
2025-09-19 11:11:07.637107 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:11:07.637119 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:11:07.637131 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:11:07.637143 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:11:07.637154 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:11:07.637166 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:11:07.637179 | orchestrator |
2025-09-19 11:11:07.637200 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-09-19 11:11:07.637212 | orchestrator | Friday 19 September 2025 11:10:00 +0000 (0:00:00.293) 0:04:18.596 ******
2025-09-19 11:11:07.637223 | orchestrator | ok: [testbed-manager]
2025-09-19 11:11:07.637234 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:11:07.637245 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:11:07.637255 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:11:07.637266 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:11:07.637277 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:11:07.637372 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:11:07.637384 | orchestrator |
2025-09-19 11:11:07.637395 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-09-19 11:11:07.637406 | orchestrator | Friday 19 September 2025 11:10:06 +0000 (0:00:05.997) 0:04:24.593 ******
2025-09-19 11:11:07.637420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:11:07.637433 | orchestrator |
2025-09-19 11:11:07.637444 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-09-19 11:11:07.637455 | orchestrator | Friday 19 September 2025 11:10:06 +0000 (0:00:00.361) 0:04:24.955 ******
2025-09-19 11:11:07.637466 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-09-19 11:11:07.637477 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-09-19 11:11:07.637488 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-09-19 11:11:07.637511 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:11:07.637523 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-09-19 11:11:07.637534 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-09-19 11:11:07.637545 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-09-19 11:11:07.637573 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:11:07.637585 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-09-19 11:11:07.637596 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:11:07.637606 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-09-19 11:11:07.637617 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-09-19 11:11:07.637628 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
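Each task header above carries profile-style timing: the parenthesized value, e.g. `(0:00:05.997)`, is the previous task's duration, followed by the cumulative playbook runtime. A small sketch (hypothetical helper for reading such logs, not part of the job itself) that extracts the per-task duration in seconds:

```python
import re

# The per-task duration appears in parentheses as (H:MM:SS.mmm);
# the unparenthesized cumulative time is deliberately not matched.
TIMING = re.compile(r"\((\d+):(\d+):(\d+(?:\.\d+)?)\)")

def task_seconds(line):
    """Return the per-task duration in seconds, or None if the line has none."""
    m = TIMING.search(line)
    if not m:
        return None
    hours, minutes, seconds = m.groups()
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

line = "Friday 19 September 2025 11:10:06 +0000 (0:00:05.997) 0:04:24.593 ******"
task_seconds(line)  # 5.997
```

Run over every task header, this would surface the slow steps in this log (for example the 35-second "Cleanup installed packages" task further down).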
2025-09-19 11:11:07.637639 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:11:07.637650 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:11:07.637661 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-09-19 11:11:07.637672 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-09-19 11:11:07.637700 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:11:07.637711 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-09-19 11:11:07.637722 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-09-19 11:11:07.637733 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:11:07.637744 | orchestrator |
2025-09-19 11:11:07.637755 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-09-19 11:11:07.637766 | orchestrator | Friday 19 September 2025 11:10:06 +0000 (0:00:00.308) 0:04:25.263 ******
2025-09-19 11:11:07.637777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:11:07.637789 | orchestrator |
2025-09-19 11:11:07.637800 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-09-19 11:11:07.637811 | orchestrator | Friday 19 September 2025 11:10:07 +0000 (0:00:00.359) 0:04:25.623 ******
2025-09-19 11:11:07.637822 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-09-19 11:11:07.637833 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:11:07.637844 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-09-19 11:11:07.637855 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-09-19 11:11:07.637865 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:11:07.637877 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-09-19 11:11:07.637887 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:11:07.637898 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:11:07.637909 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-09-19 11:11:07.637920 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-09-19 11:11:07.637931 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:11:07.637942 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:11:07.637953 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-09-19 11:11:07.637964 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:11:07.637980 | orchestrator |
2025-09-19 11:11:07.637999 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-09-19 11:11:07.638107 | orchestrator | Friday 19 September 2025 11:10:07 +0000 (0:00:00.281) 0:04:25.904 ******
2025-09-19 11:11:07.638127 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:11:07.638139 | orchestrator |
2025-09-19 11:11:07.638150 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-09-19 11:11:07.638170 | orchestrator | Friday 19 September 2025 11:10:07 +0000 (0:00:00.437) 0:04:26.341 ******
2025-09-19 11:11:07.638181 | orchestrator | changed: [testbed-manager]
2025-09-19 11:11:07.638192 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:11:07.638202 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:11:07.638220 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:11:07.638231 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:11:07.638242 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:11:07.638252 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:11:07.638263 | orchestrator |
2025-09-19 11:11:07.638274 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-09-19 11:11:07.638284 | orchestrator | Friday 19 September 2025 11:10:43 +0000 (0:00:35.021) 0:05:01.362 ******
2025-09-19 11:11:07.638295 | orchestrator | changed: [testbed-manager]
2025-09-19 11:11:07.638306 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:11:07.638316 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:11:07.638327 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:11:07.638337 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:11:07.638348 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:11:07.638358 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:11:07.638369 | orchestrator |
2025-09-19 11:11:07.638380 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-09-19 11:11:07.638390 | orchestrator | Friday 19 September 2025 11:10:51 +0000 (0:00:08.386) 0:05:09.748 ******
2025-09-19 11:11:07.638401 | orchestrator | changed: [testbed-manager]
2025-09-19 11:11:07.638411 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:11:07.638422 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:11:07.638432 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:11:07.638443 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:11:07.638453 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:11:07.638464 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:11:07.638475 | orchestrator |
2025-09-19 11:11:07.638485 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-09-19 11:11:07.638496 | orchestrator | Friday 19 September 2025 11:10:59 +0000 (0:00:08.151) 0:05:17.900 ******
2025-09-19 11:11:07.638507 | orchestrator | ok:
[testbed-node-0] 2025-09-19 11:11:07.638517 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:07.638528 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:07.638538 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:07.638549 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:07.638595 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:07.638606 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:07.638616 | orchestrator | 2025-09-19 11:11:07.638627 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-19 11:11:07.638638 | orchestrator | Friday 19 September 2025 11:11:01 +0000 (0:00:01.765) 0:05:19.665 ****** 2025-09-19 11:11:07.638649 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:11:07.638660 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:11:07.638671 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:11:07.638682 | orchestrator | changed: [testbed-manager] 2025-09-19 11:11:07.638693 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:11:07.638703 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:11:07.638714 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:11:07.638725 | orchestrator | 2025-09-19 11:11:07.638736 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-19 11:11:07.638757 | orchestrator | Friday 19 September 2025 11:11:07 +0000 (0:00:06.301) 0:05:25.967 ****** 2025-09-19 11:11:19.822426 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:11:19.822533 | orchestrator | 2025-09-19 11:11:19.822600 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-19 11:11:19.822643 | orchestrator | Friday 19 September 2025 11:11:08 +0000 
(0:00:00.418) 0:05:26.386 ****** 2025-09-19 11:11:19.822656 | orchestrator | changed: [testbed-manager] 2025-09-19 11:11:19.822668 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:11:19.822679 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:11:19.822690 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:11:19.822700 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:11:19.822711 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:11:19.822722 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:11:19.822733 | orchestrator | 2025-09-19 11:11:19.822744 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-19 11:11:19.822755 | orchestrator | Friday 19 September 2025 11:11:08 +0000 (0:00:00.783) 0:05:27.170 ****** 2025-09-19 11:11:19.822766 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:19.822778 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:11:19.822789 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:19.822799 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:19.822810 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:19.822820 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:19.822831 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:19.822841 | orchestrator | 2025-09-19 11:11:19.822852 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-19 11:11:19.822863 | orchestrator | Friday 19 September 2025 11:11:10 +0000 (0:00:01.938) 0:05:29.109 ****** 2025-09-19 11:11:19.822873 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:11:19.822884 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:11:19.822895 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:11:19.822906 | orchestrator | changed: [testbed-manager] 2025-09-19 11:11:19.822916 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:11:19.822927 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:11:19.822937 
| orchestrator | changed: [testbed-node-4] 2025-09-19 11:11:19.822950 | orchestrator | 2025-09-19 11:11:19.822962 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-19 11:11:19.822973 | orchestrator | Friday 19 September 2025 11:11:11 +0000 (0:00:00.806) 0:05:29.915 ****** 2025-09-19 11:11:19.822985 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:11:19.822997 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:11:19.823009 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:11:19.823020 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:11:19.823032 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:11:19.823044 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:11:19.823056 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:11:19.823068 | orchestrator | 2025-09-19 11:11:19.823080 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-19 11:11:19.823092 | orchestrator | Friday 19 September 2025 11:11:11 +0000 (0:00:00.315) 0:05:30.231 ****** 2025-09-19 11:11:19.823104 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:11:19.823116 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:11:19.823128 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:11:19.823140 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:11:19.823151 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:11:19.823163 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:11:19.823175 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:11:19.823186 | orchestrator | 2025-09-19 11:11:19.823198 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-19 11:11:19.823210 | orchestrator | Friday 19 September 2025 11:11:12 +0000 (0:00:00.435) 0:05:30.666 ****** 2025-09-19 11:11:19.823222 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:19.823234 | 
orchestrator | ok: [testbed-node-0] 2025-09-19 11:11:19.823246 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:19.823258 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:19.823270 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:19.823283 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:19.823295 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:19.823316 | orchestrator | 2025-09-19 11:11:19.823346 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-19 11:11:19.823358 | orchestrator | Friday 19 September 2025 11:11:12 +0000 (0:00:00.322) 0:05:30.989 ****** 2025-09-19 11:11:19.823368 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:11:19.823379 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:11:19.823390 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:11:19.823400 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:11:19.823411 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:11:19.823422 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:11:19.823432 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:11:19.823443 | orchestrator | 2025-09-19 11:11:19.823454 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-19 11:11:19.823466 | orchestrator | Friday 19 September 2025 11:11:12 +0000 (0:00:00.316) 0:05:31.306 ****** 2025-09-19 11:11:19.823477 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:19.823488 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:11:19.823498 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:19.823509 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:19.823519 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:19.823530 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:19.823559 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:19.823570 | orchestrator | 2025-09-19 11:11:19.823581 | orchestrator | TASK 
[osism.services.docker : Print used docker version] *********************** 2025-09-19 11:11:19.823592 | orchestrator | Friday 19 September 2025 11:11:13 +0000 (0:00:00.335) 0:05:31.641 ****** 2025-09-19 11:11:19.823603 | orchestrator | ok: [testbed-manager] =>  2025-09-19 11:11:19.823613 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 11:11:19.823624 | orchestrator | ok: [testbed-node-0] =>  2025-09-19 11:11:19.823635 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 11:11:19.823645 | orchestrator | ok: [testbed-node-1] =>  2025-09-19 11:11:19.823656 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 11:11:19.823667 | orchestrator | ok: [testbed-node-2] =>  2025-09-19 11:11:19.823677 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 11:11:19.823688 | orchestrator | ok: [testbed-node-3] =>  2025-09-19 11:11:19.823699 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 11:11:19.823726 | orchestrator | ok: [testbed-node-4] =>  2025-09-19 11:11:19.823738 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 11:11:19.823748 | orchestrator | ok: [testbed-node-5] =>  2025-09-19 11:11:19.823759 | orchestrator |  docker_version: 5:27.5.1 2025-09-19 11:11:19.823770 | orchestrator | 2025-09-19 11:11:19.823780 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-09-19 11:11:19.823791 | orchestrator | Friday 19 September 2025 11:11:13 +0000 (0:00:00.301) 0:05:31.943 ****** 2025-09-19 11:11:19.823802 | orchestrator | ok: [testbed-manager] =>  2025-09-19 11:11:19.823812 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-19 11:11:19.823823 | orchestrator | ok: [testbed-node-0] =>  2025-09-19 11:11:19.823834 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-19 11:11:19.823845 | orchestrator | ok: [testbed-node-1] =>  2025-09-19 11:11:19.823855 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-19 11:11:19.823866 | orchestrator | ok: [testbed-node-2] =>  2025-09-19 11:11:19.823876 | orchestrator |  
docker_cli_version: 5:27.5.1 2025-09-19 11:11:19.823887 | orchestrator | ok: [testbed-node-3] =>  2025-09-19 11:11:19.823897 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-19 11:11:19.823908 | orchestrator | ok: [testbed-node-4] =>  2025-09-19 11:11:19.823919 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-19 11:11:19.823929 | orchestrator | ok: [testbed-node-5] =>  2025-09-19 11:11:19.823940 | orchestrator |  docker_cli_version: 5:27.5.1 2025-09-19 11:11:19.823951 | orchestrator | 2025-09-19 11:11:19.823961 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-09-19 11:11:19.823972 | orchestrator | Friday 19 September 2025 11:11:14 +0000 (0:00:00.516) 0:05:32.459 ****** 2025-09-19 11:11:19.823983 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:11:19.824002 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:11:19.824012 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:11:19.824023 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:11:19.824034 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:11:19.824045 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:11:19.824055 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:11:19.824066 | orchestrator | 2025-09-19 11:11:19.824077 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-09-19 11:11:19.824088 | orchestrator | Friday 19 September 2025 11:11:14 +0000 (0:00:00.313) 0:05:32.773 ****** 2025-09-19 11:11:19.824098 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:11:19.824109 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:11:19.824120 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:11:19.824130 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:11:19.824141 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:11:19.824151 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:11:19.824162 | orchestrator | 
skipping: [testbed-node-5] 2025-09-19 11:11:19.824172 | orchestrator | 2025-09-19 11:11:19.824183 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-19 11:11:19.824194 | orchestrator | Friday 19 September 2025 11:11:14 +0000 (0:00:00.299) 0:05:33.072 ****** 2025-09-19 11:11:19.824207 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:11:19.824220 | orchestrator | 2025-09-19 11:11:19.824236 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-19 11:11:19.824247 | orchestrator | Friday 19 September 2025 11:11:15 +0000 (0:00:00.461) 0:05:33.533 ****** 2025-09-19 11:11:19.824258 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:19.824268 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:19.824279 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:11:19.824290 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:19.824300 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:11:19.824311 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:19.824322 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:19.824332 | orchestrator | 2025-09-19 11:11:19.824343 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-19 11:11:19.824354 | orchestrator | Friday 19 September 2025 11:11:16 +0000 (0:00:00.872) 0:05:34.406 ****** 2025-09-19 11:11:19.824364 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:11:19.824375 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:11:19.824386 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:11:19.824396 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:11:19.824407 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:11:19.824417 | orchestrator | 
ok: [testbed-node-4] 2025-09-19 11:11:19.824428 | orchestrator | ok: [testbed-manager] 2025-09-19 11:11:19.824439 | orchestrator | 2025-09-19 11:11:19.824450 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-19 11:11:19.824462 | orchestrator | Friday 19 September 2025 11:11:19 +0000 (0:00:03.046) 0:05:37.453 ****** 2025-09-19 11:11:19.824473 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-19 11:11:19.824484 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-19 11:11:19.824495 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-19 11:11:19.824505 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-19 11:11:19.824516 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-19 11:11:19.824526 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-19 11:11:19.824581 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:11:19.824593 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-19 11:11:19.824604 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-19 11:11:19.824614 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-19 11:11:19.824633 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:11:19.824644 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-19 11:11:19.824654 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-19 11:11:19.824665 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-19 11:11:19.824675 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:11:19.824686 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:11:19.824697 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-19 11:11:19.824708 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  
2025-09-19 11:11:19.824725 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-19 11:12:23.215785 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-19 11:12:23.215896 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-19 11:12:23.215911 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-19 11:12:23.215923 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:12:23.215934 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:12:23.215945 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-19 11:12:23.215956 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-19 11:12:23.215968 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-09-19 11:12:23.215979 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:12:23.215990 | orchestrator | 2025-09-19 11:12:23.216002 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-19 11:12:23.216015 | orchestrator | Friday 19 September 2025 11:11:19 +0000 (0:00:00.888) 0:05:38.341 ****** 2025-09-19 11:12:23.216026 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:23.216037 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:23.216047 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:23.216058 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:23.216068 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:23.216079 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:23.216089 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:23.216100 | orchestrator | 2025-09-19 11:12:23.216111 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-19 11:12:23.216122 | orchestrator | Friday 19 September 2025 11:11:26 +0000 (0:00:06.837) 0:05:45.179 ****** 2025-09-19 11:12:23.216132 | orchestrator | changed: [testbed-node-1] 
2025-09-19 11:12:23.216143 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:23.216153 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:23.216164 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:23.216175 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:23.216185 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:23.216196 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:23.216207 | orchestrator | 2025-09-19 11:12:23.216218 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-19 11:12:23.216229 | orchestrator | Friday 19 September 2025 11:11:27 +0000 (0:00:01.097) 0:05:46.277 ****** 2025-09-19 11:12:23.216240 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:23.216250 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:23.216260 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:23.216271 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:23.216281 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:23.216300 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:23.216319 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:23.216338 | orchestrator | 2025-09-19 11:12:23.216358 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-19 11:12:23.216376 | orchestrator | Friday 19 September 2025 11:11:35 +0000 (0:00:07.816) 0:05:54.094 ****** 2025-09-19 11:12:23.216394 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:23.216413 | orchestrator | changed: [testbed-manager] 2025-09-19 11:12:23.216431 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:23.216531 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:23.216556 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:23.216597 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:23.216620 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:23.216639 | 
orchestrator | 2025-09-19 11:12:23.216658 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-19 11:12:23.216675 | orchestrator | Friday 19 September 2025 11:11:39 +0000 (0:00:03.442) 0:05:57.536 ****** 2025-09-19 11:12:23.216693 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:23.216711 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:23.216731 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:23.216750 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:23.216768 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:23.216788 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:23.216806 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:23.216825 | orchestrator | 2025-09-19 11:12:23.216844 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-19 11:12:23.216862 | orchestrator | Friday 19 September 2025 11:11:40 +0000 (0:00:01.584) 0:05:59.120 ****** 2025-09-19 11:12:23.216881 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:23.216901 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:23.216920 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:23.216936 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:23.216947 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:23.216958 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:23.216968 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:23.216979 | orchestrator | 2025-09-19 11:12:23.216990 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-09-19 11:12:23.217001 | orchestrator | Friday 19 September 2025 11:11:42 +0000 (0:00:01.338) 0:06:00.459 ****** 2025-09-19 11:12:23.217012 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:12:23.217022 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:12:23.217033 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 11:12:23.217044 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:12:23.217054 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:12:23.217065 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:12:23.217076 | orchestrator | changed: [testbed-manager] 2025-09-19 11:12:23.217087 | orchestrator | 2025-09-19 11:12:23.217097 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-19 11:12:23.217108 | orchestrator | Friday 19 September 2025 11:11:42 +0000 (0:00:00.706) 0:06:01.165 ****** 2025-09-19 11:12:23.217119 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:23.217129 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:23.217140 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:23.217151 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:23.217161 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:23.217172 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:23.217182 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:23.217193 | orchestrator | 2025-09-19 11:12:23.217204 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-19 11:12:23.217214 | orchestrator | Friday 19 September 2025 11:11:53 +0000 (0:00:10.745) 0:06:11.911 ****** 2025-09-19 11:12:23.217225 | orchestrator | changed: [testbed-manager] 2025-09-19 11:12:23.217236 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:23.217267 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:23.217278 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:23.217289 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:23.217300 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:23.217310 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:23.217321 | orchestrator | 2025-09-19 11:12:23.217332 | orchestrator | TASK [osism.services.docker : Install docker-cli package] 
********************** 2025-09-19 11:12:23.217343 | orchestrator | Friday 19 September 2025 11:11:54 +0000 (0:00:00.981) 0:06:12.893 ****** 2025-09-19 11:12:23.217366 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:23.217377 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:23.217388 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:23.217407 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:23.217425 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:23.217443 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:23.217489 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:23.217507 | orchestrator | 2025-09-19 11:12:23.217526 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-19 11:12:23.217543 | orchestrator | Friday 19 September 2025 11:12:04 +0000 (0:00:09.690) 0:06:22.583 ****** 2025-09-19 11:12:23.217561 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:23.217581 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:23.217599 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:23.217618 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:23.217636 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:23.217653 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:23.217672 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:23.217690 | orchestrator | 2025-09-19 11:12:23.217710 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-19 11:12:23.217729 | orchestrator | Friday 19 September 2025 11:12:15 +0000 (0:00:11.716) 0:06:34.299 ****** 2025-09-19 11:12:23.217747 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-19 11:12:23.217762 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-19 11:12:23.217773 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-19 11:12:23.217783 | orchestrator | 
ok: [testbed-node-2] => (item=python3-docker) 2025-09-19 11:12:23.217794 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-19 11:12:23.217805 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-19 11:12:23.217815 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-19 11:12:23.217826 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-19 11:12:23.217836 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-19 11:12:23.217847 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-19 11:12:23.217857 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-19 11:12:23.217868 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-19 11:12:23.217878 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-19 11:12:23.217888 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-19 11:12:23.217899 | orchestrator | 2025-09-19 11:12:23.217910 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-19 11:12:23.217928 | orchestrator | Friday 19 September 2025 11:12:17 +0000 (0:00:01.471) 0:06:35.771 ****** 2025-09-19 11:12:23.217939 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:12:23.217949 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:12:23.217960 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:12:23.217970 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:12:23.217981 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:12:23.217991 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:12:23.218002 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:12:23.218012 | orchestrator | 2025-09-19 11:12:23.218080 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-19 11:12:23.218092 | orchestrator | Friday 19 September 2025 11:12:17 +0000 (0:00:00.563) 
0:06:36.334 ****** 2025-09-19 11:12:23.218102 | orchestrator | ok: [testbed-manager] 2025-09-19 11:12:23.218113 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:12:23.218123 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:12:23.218134 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:12:23.218144 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:12:23.218155 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:12:23.218165 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:12:23.218175 | orchestrator | 2025-09-19 11:12:23.218186 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-19 11:12:23.218211 | orchestrator | Friday 19 September 2025 11:12:22 +0000 (0:00:04.264) 0:06:40.599 ****** 2025-09-19 11:12:23.218222 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:12:23.218232 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:12:23.218242 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:12:23.218253 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:12:23.218263 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:12:23.218273 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:12:23.218284 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:12:23.218294 | orchestrator | 2025-09-19 11:12:23.218306 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-19 11:12:23.218317 | orchestrator | Friday 19 September 2025 11:12:22 +0000 (0:00:00.578) 0:06:41.178 ****** 2025-09-19 11:12:23.218327 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-09-19 11:12:23.218338 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-19 11:12:23.218348 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:12:23.218359 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  
2025-09-19 11:12:23.218369 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-09-19 11:12:23.218379 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:12:23.218390 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-09-19 11:12:23.218400 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-09-19 11:12:23.218410 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-09-19 11:12:23.218421 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-09-19 11:12:23.218432 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:12:23.218452 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-09-19 11:12:44.698679 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-09-19 11:12:44.698792 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:12:44.698807 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-09-19 11:12:44.698818 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-09-19 11:12:44.698829 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:12:44.698841 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:12:44.698852 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-09-19 11:12:44.698862 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-09-19 11:12:44.698873 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:12:44.698885 | orchestrator |
2025-09-19 11:12:44.698897 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-09-19 11:12:44.698910 | orchestrator | Friday 19 September 2025 11:12:23 +0000 (0:00:00.595) 0:06:41.773 ******
2025-09-19 11:12:44.698921 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:12:44.698932 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:12:44.698943 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:12:44.698955 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:12:44.698965 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:12:44.698976 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:12:44.698987 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:12:44.698998 | orchestrator |
2025-09-19 11:12:44.699009 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-09-19 11:12:44.699021 | orchestrator | Friday 19 September 2025 11:12:23 +0000 (0:00:00.551) 0:06:42.325 ******
2025-09-19 11:12:44.699032 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:12:44.699043 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:12:44.699054 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:12:44.699064 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:12:44.699079 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:12:44.699097 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:12:44.699148 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:12:44.699167 | orchestrator |
2025-09-19 11:12:44.699184 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-09-19 11:12:44.699202 | orchestrator | Friday 19 September 2025 11:12:24 +0000 (0:00:00.574) 0:06:42.900 ******
2025-09-19 11:12:44.699218 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:12:44.699237 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:12:44.699255 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:12:44.699274 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:12:44.699293 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:12:44.699312 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:12:44.699328 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:12:44.699340 | orchestrator |
2025-09-19 11:12:44.699352 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-09-19 11:12:44.699364 | orchestrator | Friday 19 September 2025 11:12:25 +0000 (0:00:00.885) 0:06:43.785 ******
2025-09-19 11:12:44.699376 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:44.699388 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:12:44.699400 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:12:44.699412 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:12:44.699423 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:12:44.699458 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:12:44.699469 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:12:44.699480 | orchestrator |
2025-09-19 11:12:44.699491 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-09-19 11:12:44.699502 | orchestrator | Friday 19 September 2025 11:12:27 +0000 (0:00:01.954) 0:06:45.740 ******
2025-09-19 11:12:44.699514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:12:44.699528 | orchestrator |
2025-09-19 11:12:44.699539 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-09-19 11:12:44.699550 | orchestrator | Friday 19 September 2025 11:12:28 +0000 (0:00:00.933) 0:06:46.674 ******
2025-09-19 11:12:44.699560 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:12:44.699571 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:44.699582 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:12:44.699592 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:12:44.699603 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:12:44.699614 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:12:44.699624 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:12:44.699635 | orchestrator |
2025-09-19 11:12:44.699645 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-09-19 11:12:44.699656 | orchestrator | Friday 19 September 2025 11:12:29 +0000 (0:00:00.972) 0:06:47.647 ******
2025-09-19 11:12:44.699667 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:44.699678 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:12:44.699688 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:12:44.699699 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:12:44.699709 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:12:44.699720 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:12:44.699730 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:12:44.699741 | orchestrator |
2025-09-19 11:12:44.699752 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-09-19 11:12:44.699763 | orchestrator | Friday 19 September 2025 11:12:30 +0000 (0:00:01.255) 0:06:48.902 ******
2025-09-19 11:12:44.699774 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:44.699784 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:12:44.699795 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:12:44.699805 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:12:44.699816 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:12:44.699826 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:12:44.699849 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:12:44.699860 | orchestrator |
2025-09-19 11:12:44.699871 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-09-19 11:12:44.699882 | orchestrator | Friday 19 September 2025 11:12:32 +0000 (0:00:01.492) 0:06:50.395 ******
2025-09-19 11:12:44.699893 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:12:44.699903 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:12:44.699932 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:12:44.699943 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:12:44.699954 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:12:44.699965 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:12:44.699976 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:12:44.699986 | orchestrator |
2025-09-19 11:12:44.699997 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-09-19 11:12:44.700008 | orchestrator | Friday 19 September 2025 11:12:33 +0000 (0:00:01.511) 0:06:51.906 ******
2025-09-19 11:12:44.700019 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:44.700030 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:12:44.700041 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:12:44.700052 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:12:44.700062 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:12:44.700073 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:12:44.700084 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:12:44.700095 | orchestrator |
2025-09-19 11:12:44.700106 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-09-19 11:12:44.700116 | orchestrator | Friday 19 September 2025 11:12:35 +0000 (0:00:01.483) 0:06:53.390 ******
2025-09-19 11:12:44.700127 | orchestrator | changed: [testbed-manager]
2025-09-19 11:12:44.700138 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:12:44.700148 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:12:44.700159 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:12:44.700170 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:12:44.700180 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:12:44.700191 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:12:44.700202 | orchestrator |
2025-09-19 11:12:44.700212 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-09-19 11:12:44.700223 | orchestrator | Friday 19 September 2025 11:12:36 +0000 (0:00:01.275) 0:06:54.938 ******
2025-09-19 11:12:44.700282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:12:44.700295 | orchestrator |
2025-09-19 11:12:44.700306 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-09-19 11:12:44.700317 | orchestrator | Friday 19 September 2025 11:12:37 +0000 (0:00:01.583) 0:06:56.214 ******
2025-09-19 11:12:44.700328 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:44.700338 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:12:44.700349 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:12:44.700359 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:12:44.700370 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:12:44.700381 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:12:44.700391 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:12:44.700402 | orchestrator |
2025-09-19 11:12:44.700413 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-09-19 11:12:44.700423 | orchestrator | Friday 19 September 2025 11:12:39 +0000 (0:00:01.583) 0:06:57.798 ******
2025-09-19 11:12:44.700451 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:44.700462 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:12:44.700473 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:12:44.700488 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:12:44.700499 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:12:44.700510 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:12:44.700520 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:12:44.700534 | orchestrator |
2025-09-19 11:12:44.700552 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-09-19 11:12:44.700582 | orchestrator | Friday 19 September 2025 11:12:40 +0000 (0:00:01.246) 0:06:59.044 ******
2025-09-19 11:12:44.700601 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:44.700618 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:12:44.700637 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:12:44.700654 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:12:44.700672 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:12:44.700689 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:12:44.700708 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:12:44.700726 | orchestrator |
2025-09-19 11:12:44.700745 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-09-19 11:12:44.700763 | orchestrator | Friday 19 September 2025 11:12:42 +0000 (0:00:01.471) 0:07:00.516 ******
2025-09-19 11:12:44.700783 | orchestrator | ok: [testbed-manager]
2025-09-19 11:12:44.700801 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:12:44.700819 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:12:44.700838 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:12:44.700855 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:12:44.700872 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:12:44.700890 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:12:44.700906 | orchestrator |
2025-09-19 11:12:44.700922 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-09-19 11:12:44.700939 | orchestrator | Friday 19 September 2025 11:12:43 +0000 (0:00:01.275) 0:07:01.792 ******
2025-09-19 11:12:44.700956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:12:44.700975 | orchestrator |
2025-09-19 11:12:44.700993 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:12:44.701012 | orchestrator | Friday 19 September 2025 11:12:44 +0000 (0:00:00.908) 0:07:02.700 ******
2025-09-19 11:12:44.701030 | orchestrator |
2025-09-19 11:12:44.701048 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:12:44.701065 | orchestrator | Friday 19 September 2025 11:12:44 +0000 (0:00:00.044) 0:07:02.745 ******
2025-09-19 11:12:44.701083 | orchestrator |
2025-09-19 11:12:44.701102 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:12:44.701120 | orchestrator | Friday 19 September 2025 11:12:44 +0000 (0:00:00.052) 0:07:02.790 ******
2025-09-19 11:12:44.701139 | orchestrator |
2025-09-19 11:12:44.701158 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:12:44.701177 | orchestrator | Friday 19 September 2025 11:12:44 +0000 (0:00:00.039) 0:07:02.843 ******
2025-09-19 11:12:44.701189 | orchestrator |
2025-09-19 11:12:44.701200 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:12:44.701225 | orchestrator | Friday 19 September 2025 11:12:44 +0000 (0:00:00.044) 0:07:02.882 ******
2025-09-19 11:13:12.985234 | orchestrator |
2025-09-19 11:13:12.985351 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:13:12.985367 | orchestrator | Friday 19 September 2025 11:12:44 +0000 (0:00:00.044) 0:07:02.926 ******
2025-09-19 11:13:12.985378 | orchestrator |
2025-09-19 11:13:12.985389 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 11:13:12.985450 | orchestrator | Friday 19 September 2025 11:12:44 +0000 (0:00:00.053) 0:07:02.980 ******
2025-09-19 11:13:12.985461 | orchestrator |
2025-09-19 11:13:12.985474 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-19 11:13:12.985485 | orchestrator | Friday 19 September 2025 11:12:44 +0000 (0:00:00.043) 0:07:03.023 ******
2025-09-19 11:13:12.985497 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:13:12.985509 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:13:12.985520 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:13:12.985530 | orchestrator |
2025-09-19 11:13:12.985541 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-09-19 11:13:12.985585 | orchestrator | Friday 19 September 2025 11:12:46 +0000 (0:00:01.587) 0:07:04.610 ******
2025-09-19 11:13:12.985597 | orchestrator | changed: [testbed-manager]
2025-09-19 11:13:12.985608 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:13:12.985619 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:13:12.985630 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:13:12.985640 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:13:12.985650 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:13:12.985661 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:13:12.985671 | orchestrator |
2025-09-19 11:13:12.985682 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-09-19 11:13:12.985693 | orchestrator | Friday 19 September 2025 11:12:47 +0000 (0:00:01.364) 0:07:05.974 ******
2025-09-19 11:13:12.985704 | orchestrator | changed: [testbed-manager]
2025-09-19 11:13:12.985714 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:13:12.985725 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:13:12.985735 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:13:12.985746 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:13:12.985756 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:13:12.985767 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:13:12.985778 | orchestrator |
2025-09-19 11:13:12.985789 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-09-19 11:13:12.985801 | orchestrator | Friday 19 September 2025 11:12:48 +0000 (0:00:01.201) 0:07:07.176 ******
2025-09-19 11:13:12.985812 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:13:12.985824 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:13:12.985835 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:13:12.985847 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:13:12.985859 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:13:12.985870 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:13:12.985882 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:13:12.985894 | orchestrator |
2025-09-19 11:13:12.985905 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-09-19 11:13:12.985918 | orchestrator | Friday 19 September 2025 11:12:51 +0000 (0:00:02.450) 0:07:09.626 ******
2025-09-19 11:13:12.985929 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:13:12.985941 | orchestrator |
2025-09-19 11:13:12.985953 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-09-19 11:13:12.985965 | orchestrator | Friday 19 September 2025 11:12:51 +0000 (0:00:00.117) 0:07:09.744 ******
2025-09-19 11:13:12.985976 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:12.985988 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:13:12.985999 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:13:12.986010 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:13:12.986083 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:13:12.986095 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:13:12.986107 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:13:12.986119 | orchestrator |
2025-09-19 11:13:12.986132 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-09-19 11:13:12.986144 | orchestrator | Friday 19 September 2025 11:12:52 +0000 (0:00:01.062) 0:07:10.806 ******
2025-09-19 11:13:12.986155 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:13:12.986166 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:13:12.986177 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:13:12.986187 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:13:12.986197 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:13:12.986208 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:13:12.986218 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:13:12.986229 | orchestrator |
2025-09-19 11:13:12.986239 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-09-19 11:13:12.986250 | orchestrator | Friday 19 September 2025 11:12:53 +0000 (0:00:00.764) 0:07:11.570 ******
2025-09-19 11:13:12.986262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:13:12.986285 | orchestrator |
2025-09-19 11:13:12.986296 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-09-19 11:13:12.986307 | orchestrator | Friday 19 September 2025 11:12:54 +0000 (0:00:00.957) 0:07:12.528 ******
2025-09-19 11:13:12.986317 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:12.986328 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:13:12.986339 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:13:12.986349 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:13:12.986360 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:13:12.986370 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:13:12.986380 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:13:12.986391 | orchestrator |
2025-09-19 11:13:12.986429 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-09-19 11:13:12.986441 | orchestrator | Friday 19 September 2025 11:12:55 +0000 (0:00:00.856) 0:07:13.384 ******
2025-09-19 11:13:12.986451 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-09-19 11:13:12.986463 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-09-19 11:13:12.986474 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-09-19 11:13:12.986502 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-09-19 11:13:12.986514 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-09-19 11:13:12.986525 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-09-19 11:13:12.986535 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-09-19 11:13:12.986546 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-09-19 11:13:12.986557 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-09-19 11:13:12.986569 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-09-19 11:13:12.986579 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-09-19 11:13:12.986590 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-09-19 11:13:12.986601 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-09-19 11:13:12.986612 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-09-19 11:13:12.986622 | orchestrator |
2025-09-19 11:13:12.986633 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-09-19 11:13:12.986644 | orchestrator | Friday 19 September 2025 11:12:57 +0000 (0:00:02.869) 0:07:16.254 ******
2025-09-19 11:13:12.986655 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:13:12.986665 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:13:12.986676 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:13:12.986687 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:13:12.986698 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:13:12.986708 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:13:12.986719 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:13:12.986730 | orchestrator |
2025-09-19 11:13:12.986741 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-09-19 11:13:12.986752 | orchestrator | Friday 19 September 2025 11:12:58 +0000 (0:00:00.610) 0:07:16.865 ******
2025-09-19 11:13:12.986764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:13:12.986776 | orchestrator |
2025-09-19 11:13:12.986787 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-09-19 11:13:12.986798 | orchestrator | Friday 19 September 2025 11:12:59 +0000 (0:00:00.857) 0:07:17.722 ******
2025-09-19 11:13:12.986809 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:12.986820 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:13:12.986830 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:13:12.986851 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:13:12.986862 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:13:12.986872 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:13:12.986883 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:13:12.986894 | orchestrator |
2025-09-19 11:13:12.986904 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-09-19 11:13:12.986929 | orchestrator | Friday 19 September 2025 11:13:00 +0000 (0:00:01.359) 0:07:19.082 ******
2025-09-19 11:13:12.986941 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:12.986951 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:13:12.986962 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:13:12.986973 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:13:12.986983 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:13:12.986994 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:13:12.987004 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:13:12.987015 | orchestrator |
2025-09-19 11:13:12.987026 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-09-19 11:13:12.987037 | orchestrator | Friday 19 September 2025 11:13:01 +0000 (0:00:00.939) 0:07:20.021 ******
2025-09-19 11:13:12.987048 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:13:12.987058 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:13:12.987069 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:13:12.987079 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:13:12.987090 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:13:12.987101 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:13:12.987111 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:13:12.987122 | orchestrator |
2025-09-19 11:13:12.987133 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-09-19 11:13:12.987144 | orchestrator | Friday 19 September 2025 11:13:02 +0000 (0:00:00.563) 0:07:20.584 ******
2025-09-19 11:13:12.987155 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:12.987165 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:13:12.987176 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:13:12.987186 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:13:12.987197 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:13:12.987207 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:13:12.987218 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:13:12.987228 | orchestrator |
2025-09-19 11:13:12.987239 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-09-19 11:13:12.987250 | orchestrator | Friday 19 September 2025 11:13:03 +0000 (0:00:01.491) 0:07:22.076 ******
2025-09-19 11:13:12.987261 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:13:12.987274 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:13:12.987291 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:13:12.987310 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:13:12.987327 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:13:12.987345 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:13:12.987363 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:13:12.987382 | orchestrator |
2025-09-19 11:13:12.987428 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-09-19 11:13:12.987449 | orchestrator | Friday 19 September 2025 11:13:04 +0000 (0:00:00.587) 0:07:22.663 ******
2025-09-19 11:13:12.987462 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:12.987473 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:13:12.987483 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:13:12.987494 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:13:12.987504 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:13:12.987515 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:13:12.987525 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:13:12.987535 | orchestrator |
2025-09-19 11:13:12.987546 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-09-19 11:13:12.987565 | orchestrator | Friday 19 September 2025 11:13:12 +0000 (0:00:08.656) 0:07:31.320 ******
2025-09-19 11:13:48.341782 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:48.341900 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:13:48.341942 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:13:48.341954 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:13:48.341965 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:13:48.341976 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:13:48.341986 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:13:48.341998 | orchestrator |
2025-09-19 11:13:48.342010 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-09-19 11:13:48.342094 | orchestrator | Friday 19 September 2025 11:13:14 +0000 (0:00:01.409) 0:07:32.730 ******
2025-09-19 11:13:48.342106 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:48.342117 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:13:48.342128 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:13:48.342139 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:13:48.342150 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:13:48.342161 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:13:48.342171 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:13:48.342182 | orchestrator |
2025-09-19 11:13:48.342193 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-09-19 11:13:48.342205 | orchestrator | Friday 19 September 2025 11:13:16 +0000 (0:00:01.897) 0:07:34.628 ******
2025-09-19 11:13:48.342215 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:48.342226 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:13:48.342237 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:13:48.342248 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:13:48.342258 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:13:48.342269 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:13:48.342280 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:13:48.342291 | orchestrator |
2025-09-19 11:13:48.342302 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-19 11:13:48.342313 | orchestrator | Friday 19 September 2025 11:13:17 +0000 (0:00:01.695) 0:07:36.323 ******
2025-09-19 11:13:48.342325 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:48.342337 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:13:48.342349 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:13:48.342385 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:13:48.342398 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:13:48.342410 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:13:48.342422 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:13:48.342433 | orchestrator |
2025-09-19 11:13:48.342445 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-19 11:13:48.342457 | orchestrator | Friday 19 September 2025 11:13:19 +0000 (0:00:01.157) 0:07:37.481 ******
2025-09-19 11:13:48.342469 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:13:48.342481 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:13:48.342493 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:13:48.342504 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:13:48.342516 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:13:48.342528 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:13:48.342540 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:13:48.342551 | orchestrator |
2025-09-19 11:13:48.342564 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-09-19 11:13:48.342590 | orchestrator | Friday 19 September 2025 11:13:20 +0000 (0:00:00.900) 0:07:38.381 ******
2025-09-19 11:13:48.342602 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:13:48.342614 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:13:48.342626 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:13:48.342638 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:13:48.342650 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:13:48.342662 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:13:48.342674 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:13:48.342685 | orchestrator |
2025-09-19 11:13:48.342696 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-09-19 11:13:48.342707 | orchestrator | Friday 19 September 2025 11:13:20 +0000 (0:00:00.529) 0:07:38.911 ******
2025-09-19 11:13:48.342729 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:48.342740 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:13:48.342751 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:13:48.342762 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:13:48.342773 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:13:48.342783 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:13:48.342794 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:13:48.342805 | orchestrator |
2025-09-19 11:13:48.342816 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-09-19 11:13:48.342827 | orchestrator | Friday 19 September 2025 11:13:21 +0000 (0:00:00.704) 0:07:39.615 ******
2025-09-19 11:13:48.342838 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:48.342848 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:13:48.342859 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:13:48.342870 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:13:48.342880 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:13:48.342891 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:13:48.342902 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:13:48.342912 | orchestrator |
2025-09-19 11:13:48.342923 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-09-19 11:13:48.342934 | orchestrator | Friday 19 September 2025 11:13:21 +0000 (0:00:00.520) 0:07:40.135 ******
2025-09-19 11:13:48.342945 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:48.342956 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:13:48.342967 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:13:48.342977 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:13:48.342988 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:13:48.342998 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:13:48.343009 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:13:48.343020 | orchestrator |
2025-09-19 11:13:48.343031 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-09-19 11:13:48.343042 | orchestrator | Friday 19 September 2025 11:13:22 +0000 (0:00:00.524) 0:07:40.660 ******
2025-09-19 11:13:48.343052 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:48.343063 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:13:48.343074 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:13:48.343084 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:13:48.343095 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:13:48.343106 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:13:48.343116 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:13:48.343127 | orchestrator |
2025-09-19 11:13:48.343138 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-09-19 11:13:48.343149 | orchestrator | Friday 19 September 2025 11:13:28 +0000 (0:00:05.719) 0:07:46.380 ******
2025-09-19 11:13:48.343160 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:13:48.343190 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:13:48.343205 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:13:48.343223 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:13:48.343241 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:13:48.343260 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:13:48.343279 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:13:48.343297 | orchestrator |
2025-09-19 11:13:48.343315 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-09-19 11:13:48.343334 | orchestrator | Friday 19 September 2025 11:13:28 +0000 (0:00:00.549) 0:07:46.929 ******
2025-09-19 11:13:48.343379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:13:48.343403 | orchestrator |
2025-09-19 11:13:48.343422 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-09-19 11:13:48.343440 | orchestrator | Friday 19 September 2025 11:13:29 +0000 (0:00:01.081) 0:07:48.010 ******
2025-09-19 11:13:48.343459 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:48.343481 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:13:48.343492 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:13:48.343503 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:13:48.343514 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:13:48.343525 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:13:48.343536 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:13:48.343547 | orchestrator |
2025-09-19 11:13:48.343557 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-09-19 11:13:48.343568 | orchestrator | Friday 19 September 2025 11:13:31 +0000 (0:00:02.097) 0:07:50.108 ******
2025-09-19 11:13:48.343579 | orchestrator | ok: [testbed-manager]
2025-09-19 11:13:48.343590 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:13:48.343601 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:13:48.343612 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:13:48.343623 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:13:48.343633 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:13:48.343644 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:13:48.343655 | orchestrator | 2025-09-19 11:13:48.343666 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-19 11:13:48.343677 | orchestrator | Friday 19 September 2025 11:13:33 +0000 (0:00:01.251) 0:07:51.359 ****** 2025-09-19 11:13:48.343688 | orchestrator | ok: [testbed-manager] 2025-09-19 11:13:48.343699 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:13:48.343709 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:13:48.343720 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:13:48.343731 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:13:48.343742 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:13:48.343752 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:13:48.343763 | orchestrator | 2025-09-19 11:13:48.343774 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-19 11:13:48.343785 | orchestrator | Friday 19 September 2025 11:13:34 +0000 (0:00:01.089) 0:07:52.449 ****** 2025-09-19 11:13:48.343803 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 11:13:48.343817 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 11:13:48.343828 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 11:13:48.343839 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 11:13:48.343850 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 11:13:48.343861 | orchestrator | 
changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 11:13:48.343872 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 11:13:48.343883 | orchestrator | 2025-09-19 11:13:48.343894 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-09-19 11:13:48.343905 | orchestrator | Friday 19 September 2025 11:13:35 +0000 (0:00:01.802) 0:07:54.252 ****** 2025-09-19 11:13:48.343916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:13:48.343927 | orchestrator | 2025-09-19 11:13:48.343938 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-19 11:13:48.343949 | orchestrator | Friday 19 September 2025 11:13:36 +0000 (0:00:00.847) 0:07:55.099 ****** 2025-09-19 11:13:48.343959 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:13:48.343971 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:13:48.343997 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:13:48.344009 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:13:48.344020 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:13:48.344031 | orchestrator | changed: [testbed-manager] 2025-09-19 11:13:48.344042 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:13:48.344052 | orchestrator | 2025-09-19 11:13:48.344064 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-19 11:13:48.344075 | orchestrator | Friday 19 September 2025 11:13:46 +0000 (0:00:09.729) 0:08:04.829 ****** 2025-09-19 11:13:48.344086 | orchestrator | 
ok: [testbed-manager] 2025-09-19 11:13:48.344097 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:13:48.344117 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:14:03.949717 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:14:03.949810 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:14:03.949825 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:14:03.949836 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:14:03.949848 | orchestrator | 2025-09-19 11:14:03.949860 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-19 11:14:03.949873 | orchestrator | Friday 19 September 2025 11:13:48 +0000 (0:00:01.851) 0:08:06.680 ****** 2025-09-19 11:14:03.949884 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:14:03.949895 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:14:03.949906 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:14:03.949916 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:14:03.949927 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:14:03.949938 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:14:03.949949 | orchestrator | 2025-09-19 11:14:03.949960 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-09-19 11:14:03.949971 | orchestrator | Friday 19 September 2025 11:13:49 +0000 (0:00:01.341) 0:08:08.021 ****** 2025-09-19 11:14:03.949982 | orchestrator | changed: [testbed-manager] 2025-09-19 11:14:03.949993 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:14:03.950004 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:14:03.950061 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:14:03.950074 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:14:03.950085 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:14:03.950096 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:14:03.950106 | orchestrator | 2025-09-19 11:14:03.950118 | orchestrator | PLAY [Apply bootstrap role part 2] 
********************************************* 2025-09-19 11:14:03.950129 | orchestrator | 2025-09-19 11:14:03.950139 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-19 11:14:03.950150 | orchestrator | Friday 19 September 2025 11:13:51 +0000 (0:00:01.578) 0:08:09.600 ****** 2025-09-19 11:14:03.950161 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:14:03.950172 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:14:03.950184 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:14:03.950195 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:14:03.950206 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:14:03.950217 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:14:03.950228 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:14:03.950238 | orchestrator | 2025-09-19 11:14:03.950249 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-09-19 11:14:03.950260 | orchestrator | 2025-09-19 11:14:03.950271 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-19 11:14:03.950282 | orchestrator | Friday 19 September 2025 11:13:51 +0000 (0:00:00.527) 0:08:10.128 ****** 2025-09-19 11:14:03.950293 | orchestrator | changed: [testbed-manager] 2025-09-19 11:14:03.950304 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:14:03.950315 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:14:03.950326 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:14:03.950364 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:14:03.950377 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:14:03.950388 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:14:03.950421 | orchestrator | 2025-09-19 11:14:03.950433 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-19 11:14:03.950444 | orchestrator | Friday 19 
September 2025 11:13:53 +0000 (0:00:01.391) 0:08:11.520 ****** 2025-09-19 11:14:03.950455 | orchestrator | ok: [testbed-manager] 2025-09-19 11:14:03.950465 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:14:03.950476 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:14:03.950487 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:14:03.950497 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:14:03.950508 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:14:03.950532 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:14:03.950543 | orchestrator | 2025-09-19 11:14:03.950554 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-19 11:14:03.950565 | orchestrator | Friday 19 September 2025 11:13:54 +0000 (0:00:01.491) 0:08:13.011 ****** 2025-09-19 11:14:03.950576 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:14:03.950586 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:14:03.950597 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:14:03.950608 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:14:03.950618 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:14:03.950629 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:14:03.950639 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:14:03.950650 | orchestrator | 2025-09-19 11:14:03.950661 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-19 11:14:03.950672 | orchestrator | Friday 19 September 2025 11:13:55 +0000 (0:00:01.020) 0:08:14.032 ****** 2025-09-19 11:14:03.950682 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:14:03.950693 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:14:03.950704 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:14:03.950714 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:14:03.950725 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:14:03.950736 | orchestrator | changed: 
[testbed-node-4] 2025-09-19 11:14:03.950746 | orchestrator | changed: [testbed-manager] 2025-09-19 11:14:03.950757 | orchestrator | 2025-09-19 11:14:03.950768 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-19 11:14:03.950778 | orchestrator | 2025-09-19 11:14:03.950789 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-19 11:14:03.950800 | orchestrator | Friday 19 September 2025 11:13:57 +0000 (0:00:01.823) 0:08:15.856 ****** 2025-09-19 11:14:03.950811 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:14:03.950823 | orchestrator | 2025-09-19 11:14:03.950834 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-19 11:14:03.950845 | orchestrator | Friday 19 September 2025 11:13:58 +0000 (0:00:01.063) 0:08:16.919 ****** 2025-09-19 11:14:03.950855 | orchestrator | ok: [testbed-manager] 2025-09-19 11:14:03.950866 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:14:03.950877 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:14:03.950888 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:14:03.950898 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:14:03.950909 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:14:03.950919 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:14:03.950930 | orchestrator | 2025-09-19 11:14:03.950941 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-19 11:14:03.950967 | orchestrator | Friday 19 September 2025 11:13:59 +0000 (0:00:00.887) 0:08:17.807 ****** 2025-09-19 11:14:03.950979 | orchestrator | changed: [testbed-manager] 2025-09-19 11:14:03.950990 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:14:03.951000 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:14:03.951011 
| orchestrator | changed: [testbed-node-2] 2025-09-19 11:14:03.951022 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:14:03.951032 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:14:03.951043 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:14:03.951054 | orchestrator | 2025-09-19 11:14:03.951073 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-19 11:14:03.951084 | orchestrator | Friday 19 September 2025 11:14:00 +0000 (0:00:01.433) 0:08:19.240 ****** 2025-09-19 11:14:03.951095 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:14:03.951106 | orchestrator | 2025-09-19 11:14:03.951117 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-19 11:14:03.951127 | orchestrator | Friday 19 September 2025 11:14:01 +0000 (0:00:01.056) 0:08:20.297 ****** 2025-09-19 11:14:03.951138 | orchestrator | ok: [testbed-manager] 2025-09-19 11:14:03.951149 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:14:03.951159 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:14:03.951170 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:14:03.951181 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:14:03.951192 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:14:03.951202 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:14:03.951213 | orchestrator | 2025-09-19 11:14:03.951224 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-19 11:14:03.951235 | orchestrator | Friday 19 September 2025 11:14:02 +0000 (0:00:00.820) 0:08:21.117 ****** 2025-09-19 11:14:03.951245 | orchestrator | changed: [testbed-manager] 2025-09-19 11:14:03.951256 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:14:03.951267 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:14:03.951277 | 
orchestrator | changed: [testbed-node-2]
2025-09-19 11:14:03.951288 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:14:03.951299 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:14:03.951309 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:14:03.951320 | orchestrator |
2025-09-19 11:14:03.951331 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:14:03.951358 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-09-19 11:14:03.951369 | orchestrator | testbed-node-0  : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-19 11:14:03.951380 | orchestrator | testbed-node-1  : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 11:14:03.951396 | orchestrator | testbed-node-2  : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 11:14:03.951407 | orchestrator | testbed-node-3  : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 11:14:03.951418 | orchestrator | testbed-node-4  : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 11:14:03.951429 | orchestrator | testbed-node-5  : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-19 11:14:03.951440 | orchestrator |
2025-09-19 11:14:03.951451 | orchestrator |
2025-09-19 11:14:03.951462 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:14:03.951473 | orchestrator | Friday 19 September 2025 11:14:03 +0000 (0:00:01.157) 0:08:22.275 ******
2025-09-19 11:14:03.951484 | orchestrator | ===============================================================================
2025-09-19 11:14:03.951495 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.47s
2025-09-19 11:14:03.951505 | orchestrator |
osism.commons.packages : Download required packages -------------------- 38.93s
2025-09-19 11:14:03.951516 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.02s
2025-09-19 11:14:03.951526 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.02s
2025-09-19 11:14:03.951545 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.80s
2025-09-19 11:14:03.951556 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.32s
2025-09-19 11:14:03.951567 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.72s
2025-09-19 11:14:03.951577 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.75s
2025-09-19 11:14:03.951588 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.73s
2025-09-19 11:14:03.951599 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.69s
2025-09-19 11:14:03.951610 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.37s
2025-09-19 11:14:03.951621 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.66s
2025-09-19 11:14:03.951631 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.39s
2025-09-19 11:14:03.951642 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.15s
2025-09-19 11:14:03.951653 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.82s
2025-09-19 11:14:03.951670 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.65s
2025-09-19 11:14:04.249553 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.84s
2025-09-19 11:14:04.249636 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------- 6.76s
2025-09-19 11:14:04.249649 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.30s
2025-09-19 11:14:04.249661 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 6.00s
2025-09-19 11:14:04.473199 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-09-19 11:14:04.473278 | orchestrator | + osism apply network
2025-09-19 11:14:16.639207 | orchestrator | 2025-09-19 11:14:16 | INFO  | Task 950221da-7ef3-4f18-95cc-70d48fe602b3 (network) was prepared for execution.
2025-09-19 11:14:16.639273 | orchestrator | 2025-09-19 11:14:16 | INFO  | It takes a moment until task 950221da-7ef3-4f18-95cc-70d48fe602b3 (network) has been started and output is visible here.
2025-09-19 11:14:45.951679 | orchestrator |
2025-09-19 11:14:45.951790 | orchestrator | PLAY [Apply role network] ******************************************************
2025-09-19 11:14:45.951806 | orchestrator |
2025-09-19 11:14:45.951818 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-09-19 11:14:45.951830 | orchestrator | Friday 19 September 2025 11:14:20 +0000 (0:00:00.247) 0:00:00.247 ******
2025-09-19 11:14:45.951841 | orchestrator | ok: [testbed-manager]
2025-09-19 11:14:45.951854 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:14:45.951865 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:14:45.951875 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:14:45.951886 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:14:45.951896 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:14:45.951907 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:14:45.951918 | orchestrator |
2025-09-19 11:14:45.951928 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-09-19 11:14:45.951940 | orchestrator | Friday 19 September 2025 11:14:21 +0000 (0:00:00.731)
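The PLAY RECAP and TASKS RECAP above use Ansible's standard text formats: one `host : ok=N changed=N …` line per host, and one `role : task name ----- 12.34s` line per timed task. As a rough sketch (assuming the `timestamp | orchestrator |` prefixes have already been stripped, and using sample lines copied from this log), both can be parsed like this:

```python
import re

# Ansible PLAY RECAP line: "host : ok=1  changed=2  unreachable=0 ..."
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")
# Ansible profile_tasks timing line: "role : task name ------- 77.47s"
TIMING_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def parse_play_recap(line):
    """Return (host, {stat: int}) for a recap line, or None if it does not match."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    pairs = (p.split("=") for p in m.group("stats").split())
    return m.group("host"), {k: int(v) for k, v in pairs}

def parse_task_timing(line):
    """Return (task name, seconds) for a timing line, or None if it does not match."""
    m = TIMING_RE.match(line.strip())
    return (m.group("task"), float(m.group("secs"))) if m else None

host, stats = parse_play_recap(
    "testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0"
)
task, secs = parse_task_timing(
    "osism.commons.packages : Install required packages --------------------- 77.47s"
)
```

A check such as `stats["failed"] == 0 and stats["unreachable"] == 0` over every recap line is a common way to gate a CI job on a run like this one.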
0:00:00.979 ****** 2025-09-19 11:14:45.951952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:14:45.951965 | orchestrator | 2025-09-19 11:14:45.951976 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-19 11:14:45.951987 | orchestrator | Friday 19 September 2025 11:14:22 +0000 (0:00:01.277) 0:00:02.257 ****** 2025-09-19 11:14:45.951998 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:14:45.952009 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:14:45.952019 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:14:45.952030 | orchestrator | ok: [testbed-manager] 2025-09-19 11:14:45.952065 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:14:45.952076 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:14:45.952099 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:14:45.952110 | orchestrator | 2025-09-19 11:14:45.952121 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-19 11:14:45.952132 | orchestrator | Friday 19 September 2025 11:14:24 +0000 (0:00:01.998) 0:00:04.255 ****** 2025-09-19 11:14:45.952143 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:14:45.952154 | orchestrator | ok: [testbed-manager] 2025-09-19 11:14:45.952164 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:14:45.952175 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:14:45.952186 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:14:45.952197 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:14:45.952207 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:14:45.952218 | orchestrator | 2025-09-19 11:14:45.952230 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-19 11:14:45.952242 | orchestrator | 
Friday 19 September 2025 11:14:26 +0000 (0:00:01.813) 0:00:06.069 ****** 2025-09-19 11:14:45.952254 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-19 11:14:45.952267 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-19 11:14:45.952279 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-19 11:14:45.952317 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-19 11:14:45.952330 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-19 11:14:45.952341 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-19 11:14:45.952353 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-19 11:14:45.952365 | orchestrator | 2025-09-19 11:14:45.952378 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-09-19 11:14:45.952390 | orchestrator | Friday 19 September 2025 11:14:27 +0000 (0:00:01.012) 0:00:07.081 ****** 2025-09-19 11:14:45.952402 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 11:14:45.952414 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 11:14:45.952426 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:14:45.952438 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 11:14:45.952450 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 11:14:45.952461 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 11:14:45.952473 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-19 11:14:45.952485 | orchestrator | 2025-09-19 11:14:45.952497 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-19 11:14:45.952509 | orchestrator | Friday 19 September 2025 11:14:31 +0000 (0:00:03.711) 0:00:10.793 ****** 2025-09-19 11:14:45.952521 | orchestrator | changed: [testbed-manager] 2025-09-19 11:14:45.952533 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:14:45.952544 | orchestrator | 
changed: [testbed-node-1] 2025-09-19 11:14:45.952556 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:14:45.952567 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:14:45.952579 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:14:45.952590 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:14:45.952600 | orchestrator | 2025-09-19 11:14:45.952611 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-19 11:14:45.952622 | orchestrator | Friday 19 September 2025 11:14:32 +0000 (0:00:01.460) 0:00:12.253 ****** 2025-09-19 11:14:45.952633 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 11:14:45.952643 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:14:45.952654 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-19 11:14:45.952664 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 11:14:45.952675 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 11:14:45.952685 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 11:14:45.952696 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 11:14:45.952706 | orchestrator | 2025-09-19 11:14:45.952717 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-19 11:14:45.952728 | orchestrator | Friday 19 September 2025 11:14:34 +0000 (0:00:02.034) 0:00:14.288 ****** 2025-09-19 11:14:45.952747 | orchestrator | ok: [testbed-manager] 2025-09-19 11:14:45.952758 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:14:45.952768 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:14:45.952779 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:14:45.952790 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:14:45.952800 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:14:45.952811 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:14:45.952821 | orchestrator | 2025-09-19 11:14:45.952832 | orchestrator | TASK [osism.commons.network : 
Copy interfaces file] ****************************
2025-09-19 11:14:45.952859 | orchestrator | Friday 19 September 2025 11:14:35 +0000 (0:00:01.148) 0:00:15.436 ******
2025-09-19 11:14:45.952870 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:14:45.952881 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:14:45.952891 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:14:45.952902 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:14:45.952913 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:14:45.952923 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:14:45.952933 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:14:45.952944 | orchestrator |
2025-09-19 11:14:45.952955 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-09-19 11:14:45.952966 | orchestrator | Friday 19 September 2025 11:14:36 +0000 (0:00:00.666) 0:00:16.103 ******
2025-09-19 11:14:45.952976 | orchestrator | ok: [testbed-manager]
2025-09-19 11:14:45.952987 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:14:45.952997 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:14:45.953008 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:14:45.953019 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:14:45.953029 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:14:45.953039 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:14:45.953050 | orchestrator |
2025-09-19 11:14:45.953061 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-09-19 11:14:45.953071 | orchestrator | Friday 19 September 2025 11:14:38 +0000 (0:00:02.340) 0:00:18.443 ******
2025-09-19 11:14:45.953082 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:14:45.953093 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:14:45.953103 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:14:45.953114 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:14:45.953124 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:14:45.953135 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:14:45.953146 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-09-19 11:14:45.953158 | orchestrator |
2025-09-19 11:14:45.953173 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-09-19 11:14:45.953184 | orchestrator | Friday 19 September 2025 11:14:39 +0000 (0:00:00.954) 0:00:19.398 ******
2025-09-19 11:14:45.953195 | orchestrator | ok: [testbed-manager]
2025-09-19 11:14:45.953205 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:14:45.953216 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:14:45.953227 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:14:45.953237 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:14:45.953248 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:14:45.953258 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:14:45.953269 | orchestrator |
2025-09-19 11:14:45.953280 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-09-19 11:14:45.953290 | orchestrator | Friday 19 September 2025 11:14:41 +0000 (0:00:01.724) 0:00:21.122 ******
2025-09-19 11:14:45.953319 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:14:45.953331 | orchestrator |
2025-09-19 11:14:45.953342 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-19 11:14:45.953360 | orchestrator | Friday 19 September 2025 11:14:42 +0000 (0:00:01.443) 0:00:22.566 ******
2025-09-19 11:14:45.953371 | orchestrator | ok: [testbed-manager]
2025-09-19 11:14:45.953382 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:14:45.953393 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:14:45.953404 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:14:45.953414 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:14:45.953425 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:14:45.953436 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:14:45.953446 | orchestrator |
2025-09-19 11:14:45.953457 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-09-19 11:14:45.953468 | orchestrator | Friday 19 September 2025 11:14:43 +0000 (0:00:01.019) 0:00:23.585 ******
2025-09-19 11:14:45.953479 | orchestrator | ok: [testbed-manager]
2025-09-19 11:14:45.953489 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:14:45.953500 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:14:45.953510 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:14:45.953521 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:14:45.953531 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:14:45.953542 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:14:45.953552 | orchestrator |
2025-09-19 11:14:45.953563 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-19 11:14:45.953574 | orchestrator | Friday 19 September 2025 11:14:44 +0000 (0:00:00.850) 0:00:24.435 ******
2025-09-19 11:14:45.953585 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 11:14:45.953596 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 11:14:45.953607 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 11:14:45.953617 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 11:14:45.953628 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 11:14:45.953639 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 11:14:45.953649 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 11:14:45.953660 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 11:14:45.953671 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 11:14:45.953681 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-09-19 11:14:45.953692 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 11:14:45.953703 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 11:14:45.953713 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 11:14:45.953724 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 11:14:45.953735 | orchestrator |
2025-09-19 11:14:45.953752 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-09-19 11:15:02.884495 | orchestrator | Friday 19 September 2025 11:14:45 +0000 (0:00:01.190) 0:00:25.626 ******
2025-09-19 11:15:02.884588 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:15:02.884604 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:15:02.884616 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:15:02.884627 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:15:02.884638 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:15:02.884649 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:15:02.884660 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:15:02.884671 | orchestrator |
2025-09-19 11:15:02.884683 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-09-19 11:15:02.884694 | orchestrator | Friday 19 September 2025 11:14:46 +0000 (0:00:00.657) 0:00:26.283 ******
2025-09-19 11:15:02.884706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-5, testbed-node-2, testbed-node-4
2025-09-19 11:15:02.884741 | orchestrator |
2025-09-19 11:15:02.884753 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-09-19 11:15:02.884764 | orchestrator | Friday 19 September 2025 11:14:51 +0000 (0:00:04.508) 0:00:30.792 ******
2025-09-19 11:15:02.884788 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.884801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.884813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.884824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.884834 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.884845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.884857 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:02.884868 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.884879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:02.884896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:02.884907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:02.884933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:02.884945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:02.884964 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:02.884976 | orchestrator |
2025-09-19 11:15:02.884987 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-09-19 11:15:02.884998 | orchestrator | Friday 19 September 2025 11:14:56 +0000 (0:00:05.607) 0:00:36.400 ******
2025-09-19 11:15:02.885009 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.885021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.885032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.885043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.885055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.885068 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:02.885081 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.885094 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-19 11:15:02.885107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:02.885119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:02.885132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:02.885145 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:02.885186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:08.366830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-19 11:15:08.366906 | orchestrator |
2025-09-19 11:15:08.366920 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-09-19 11:15:08.366931 | orchestrator | Friday 19 September 2025 11:15:02 +0000 (0:00:06.160) 0:00:42.560 ******
2025-09-19 11:15:08.366941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:15:08.366950 | orchestrator |
2025-09-19 11:15:08.366959 | orchestrator |
TASK [osism.commons.network : List existing configuration files] ***************
2025-09-19 11:15:08.366968 | orchestrator | Friday 19 September 2025 11:15:03 +0000 (0:00:01.123) 0:00:43.683 ******
2025-09-19 11:15:08.366977 | orchestrator | ok: [testbed-manager]
2025-09-19 11:15:08.366986 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:15:08.367007 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:15:08.367017 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:15:08.367025 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:15:08.367034 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:15:08.367046 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:15:08.367054 | orchestrator |
2025-09-19 11:15:08.367063 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-19 11:15:08.367072 | orchestrator | Friday 19 September 2025 11:15:05 +0000 (0:00:01.080) 0:00:44.764 ******
2025-09-19 11:15:08.367081 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:15:08.367090 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:15:08.367098 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:15:08.367107 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:15:08.367116 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:15:08.367124 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:15:08.367133 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:15:08.367141 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:15:08.367150 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:15:08.367159 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:15:08.367168 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:15:08.367176 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:15:08.367185 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:15:08.367194 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:15:08.367202 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:15:08.367211 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:15:08.367219 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:15:08.367242 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:15:08.367251 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:15:08.367260 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:15:08.367269 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:15:08.367322 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:15:08.367331 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:15:08.367339 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:15:08.367348 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:15:08.367357 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:15:08.367365 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:15:08.367374 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:15:08.367382 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:15:08.367391 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:15:08.367400 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 11:15:08.367410 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 11:15:08.367420 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 11:15:08.367430 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 11:15:08.367440 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:15:08.367449 | orchestrator |
2025-09-19 11:15:08.367460 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-09-19 11:15:08.367482 | orchestrator | Friday 19 September 2025 11:15:06 +0000 (0:00:01.812) 0:00:46.576 ******
2025-09-19 11:15:08.367492 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:15:08.367502 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:15:08.367512 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:15:08.367522 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:15:08.367532 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:15:08.367542 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:15:08.367552 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:15:08.367562 | orchestrator |
2025-09-19 11:15:08.367571 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-09-19 11:15:08.367581 | orchestrator | Friday 19 September 2025 11:15:07 +0000 (0:00:00.582) 0:00:47.159 ******
2025-09-19 11:15:08.367591 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:15:08.367601 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:15:08.367612 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:15:08.367621 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:15:08.367631 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:15:08.367641 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:15:08.367651 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:15:08.367661 | orchestrator |
2025-09-19 11:15:08.367671 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:15:08.367682 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 11:15:08.367697 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:15:08.367708 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:15:08.367718 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:15:08.367735 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:15:08.367745 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:15:08.367755 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:15:08.367765 | orchestrator |
2025-09-19 11:15:08.367773 | orchestrator |
2025-09-19 11:15:08.367782 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:15:08.367791 | orchestrator | Friday 19 September 2025 11:15:08 +0000 (0:00:00.608) 0:00:47.767 ******
2025-09-19 11:15:08.367800 | orchestrator | ===============================================================================
2025-09-19 11:15:08.367809 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.16s
2025-09-19 11:15:08.367817 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.61s
2025-09-19 11:15:08.367826 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.51s
2025-09-19 11:15:08.367834 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.71s
2025-09-19 11:15:08.367843 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.34s
2025-09-19 11:15:08.367852 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.03s
2025-09-19 11:15:08.367861 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.00s
2025-09-19 11:15:08.367869 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.81s
2025-09-19 11:15:08.367878 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.81s
2025-09-19 11:15:08.367886 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.72s
2025-09-19 11:15:08.367895 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.46s
2025-09-19 11:15:08.367903 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.44s
2025-09-19 11:15:08.367912 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.28s
2025-09-19 11:15:08.367920 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.19s
2025-09-19 11:15:08.367929 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.15s
2025-09-19 11:15:08.367938 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.12s
2025-09-19 11:15:08.367946 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.08s
2025-09-19 11:15:08.367955 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.02s
2025-09-19 11:15:08.367963 | orchestrator | osism.commons.network : Create required directories --------------------- 1.01s
2025-09-19 11:15:08.367972 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.95s
2025-09-19 11:15:08.564672 | orchestrator | + osism apply wireguard
2025-09-19 11:15:20.255699 | orchestrator | 2025-09-19 11:15:20 | INFO  | Task 95629c22-77ab-4215-b52d-359d3572ad2b (wireguard) was prepared for execution.
2025-09-19 11:15:20.255814 | orchestrator | 2025-09-19 11:15:20 | INFO  | It takes a moment until task 95629c22-77ab-4215-b52d-359d3572ad2b (wireguard) has been started and output is visible here.
2025-09-19 11:15:41.288187 | orchestrator |
2025-09-19 11:15:41.288353 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-09-19 11:15:41.288370 | orchestrator |
2025-09-19 11:15:41.288382 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-09-19 11:15:41.288394 | orchestrator | Friday 19 September 2025 11:15:24 +0000 (0:00:00.226) 0:00:00.226 ******
2025-09-19 11:15:41.288407 | orchestrator | ok: [testbed-manager]
2025-09-19 11:15:41.288443 | orchestrator |
2025-09-19 11:15:41.288455 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-09-19 11:15:41.288467 | orchestrator | Friday 19 September 2025 11:15:26 +0000 (0:00:01.980) 0:00:02.207 ******
2025-09-19 11:15:41.288477 | orchestrator | changed: [testbed-manager]
2025-09-19 11:15:41.288489 | orchestrator |
2025-09-19 11:15:41.288500 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-09-19 11:15:41.288510 | orchestrator |
Friday 19 September 2025 11:15:33 +0000 (0:00:07.089) 0:00:09.296 ******
2025-09-19 11:15:41.288522 | orchestrator | changed: [testbed-manager]
2025-09-19 11:15:41.288533 | orchestrator |
2025-09-19 11:15:41.288544 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-09-19 11:15:41.288555 | orchestrator | Friday 19 September 2025 11:15:34 +0000 (0:00:00.556) 0:00:09.853 ******
2025-09-19 11:15:41.288565 | orchestrator | changed: [testbed-manager]
2025-09-19 11:15:41.288576 | orchestrator |
2025-09-19 11:15:41.288587 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-09-19 11:15:41.288613 | orchestrator | Friday 19 September 2025 11:15:34 +0000 (0:00:00.458) 0:00:10.311 ******
2025-09-19 11:15:41.288624 | orchestrator | ok: [testbed-manager]
2025-09-19 11:15:41.288635 | orchestrator |
2025-09-19 11:15:41.288646 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-09-19 11:15:41.288656 | orchestrator | Friday 19 September 2025 11:15:35 +0000 (0:00:00.529) 0:00:10.841 ******
2025-09-19 11:15:41.288667 | orchestrator | ok: [testbed-manager]
2025-09-19 11:15:41.288677 | orchestrator |
2025-09-19 11:15:41.288688 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-09-19 11:15:41.288699 | orchestrator | Friday 19 September 2025 11:15:35 +0000 (0:00:00.562) 0:00:11.403 ******
2025-09-19 11:15:41.288710 | orchestrator | ok: [testbed-manager]
2025-09-19 11:15:41.288720 | orchestrator |
2025-09-19 11:15:41.288731 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-09-19 11:15:41.288743 | orchestrator | Friday 19 September 2025 11:15:36 +0000 (0:00:00.413) 0:00:11.817 ******
2025-09-19 11:15:41.288754 | orchestrator | changed: [testbed-manager]
2025-09-19 11:15:41.288766 | orchestrator |
2025-09-19 11:15:41.288779 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-09-19 11:15:41.288790 | orchestrator | Friday 19 September 2025 11:15:37 +0000 (0:00:01.214) 0:00:13.032 ******
2025-09-19 11:15:41.288802 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 11:15:41.288814 | orchestrator | changed: [testbed-manager]
2025-09-19 11:15:41.288826 | orchestrator |
2025-09-19 11:15:41.288838 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-09-19 11:15:41.288849 | orchestrator | Friday 19 September 2025 11:15:38 +0000 (0:00:00.941) 0:00:13.973 ******
2025-09-19 11:15:41.288861 | orchestrator | changed: [testbed-manager]
2025-09-19 11:15:41.288872 | orchestrator |
2025-09-19 11:15:41.288884 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-09-19 11:15:41.288896 | orchestrator | Friday 19 September 2025 11:15:39 +0000 (0:00:01.689) 0:00:15.662 ******
2025-09-19 11:15:41.288908 | orchestrator | changed: [testbed-manager]
2025-09-19 11:15:41.288919 | orchestrator |
2025-09-19 11:15:41.288931 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:15:41.288943 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:15:41.288956 | orchestrator |
2025-09-19 11:15:41.288968 | orchestrator |
2025-09-19 11:15:41.288979 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:15:41.288989 | orchestrator | Friday 19 September 2025 11:15:40 +0000 (0:00:00.941) 0:00:16.604 ******
2025-09-19 11:15:41.289000 | orchestrator | ===============================================================================
2025-09-19 11:15:41.289011 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.09s
2025-09-19 11:15:41.289021 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.98s
2025-09-19 11:15:41.289039 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s
2025-09-19 11:15:41.289049 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.22s
2025-09-19 11:15:41.289060 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s
2025-09-19 11:15:41.289071 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s
2025-09-19 11:15:41.289081 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.56s
2025-09-19 11:15:41.289092 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s
2025-09-19 11:15:41.289103 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s
2025-09-19 11:15:41.289113 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s
2025-09-19 11:15:41.289124 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2025-09-19 11:15:41.587754 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-09-19 11:15:41.624461 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-09-19 11:15:41.624553 | orchestrator | Dload Upload Total Spent Left Speed
2025-09-19 11:15:41.699968 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 197 0 --:--:-- --:--:-- --:--:-- 200
2025-09-19 11:15:41.720847 | orchestrator | + osism apply --environment custom workarounds
2025-09-19 11:15:43.743587 | orchestrator | 2025-09-19 11:15:43 | INFO  | Trying to run play workarounds in environment custom
2025-09-19 11:15:53.853603 | orchestrator | 2025-09-19 11:15:53 | INFO  | Task efa8e101-085d-4009-8b60-9b6e2bcd9035 (workarounds) was prepared for execution.
2025-09-19 11:15:53.853699 | orchestrator | 2025-09-19 11:15:53 | INFO  | It takes a moment until task efa8e101-085d-4009-8b60-9b6e2bcd9035 (workarounds) has been started and output is visible here.
2025-09-19 11:16:19.757312 | orchestrator |
2025-09-19 11:16:19.757428 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:16:19.757445 | orchestrator |
2025-09-19 11:16:19.757458 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-09-19 11:16:19.757471 | orchestrator | Friday 19 September 2025 11:15:57 +0000 (0:00:00.144) 0:00:00.144 ******
2025-09-19 11:16:19.757482 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-09-19 11:16:19.757493 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-09-19 11:16:19.757504 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-09-19 11:16:19.757515 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-09-19 11:16:19.757541 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-09-19 11:16:19.757552 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-09-19 11:16:19.757563 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-09-19 11:16:19.757574 | orchestrator |
2025-09-19 11:16:19.757585 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-09-19 11:16:19.757596 | orchestrator |
2025-09-19 11:16:19.757606 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-09-19 11:16:19.757617 | orchestrator | Friday 19 September 2025 11:15:58 +0000 (0:00:00.770) 0:00:00.915 ******
2025-09-19 11:16:19.757628 | orchestrator | ok: [testbed-manager]
2025-09-19 11:16:19.757640 | orchestrator |
2025-09-19 11:16:19.757651 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-09-19 11:16:19.757662 | orchestrator |
2025-09-19 11:16:19.757673 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-09-19 11:16:19.757684 | orchestrator | Friday 19 September 2025 11:16:00 +0000 (0:00:02.346) 0:00:03.262 ******
2025-09-19 11:16:19.757718 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:16:19.757730 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:16:19.757740 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:16:19.757751 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:16:19.757761 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:16:19.757772 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:16:19.757783 | orchestrator |
2025-09-19 11:16:19.757793 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-09-19 11:16:19.757804 | orchestrator |
2025-09-19 11:16:19.757815 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-09-19 11:16:19.757826 | orchestrator | Friday 19 September 2025 11:16:02 +0000 (0:00:02.009) 0:00:05.272 ******
2025-09-19 11:16:19.757837 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 11:16:19.757850 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 11:16:19.757861 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 11:16:19.757871 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 11:16:19.757882 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 11:16:19.757893 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 11:16:19.757903 | orchestrator |
2025-09-19 11:16:19.757914 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-09-19 11:16:19.757925 | orchestrator | Friday 19 September 2025 11:16:04 +0000 (0:00:01.562) 0:00:06.834 ******
2025-09-19 11:16:19.757936 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:16:19.757946 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:16:19.757957 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:16:19.757968 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:16:19.757978 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:16:19.757989 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:16:19.757999 | orchestrator |
2025-09-19 11:16:19.758010 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-09-19 11:16:19.758071 | orchestrator | Friday 19 September 2025 11:16:08 +0000 (0:00:03.751) 0:00:10.586 ******
2025-09-19 11:16:19.758082 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:16:19.758093 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:16:19.758103 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:16:19.758115 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:16:19.758125 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:16:19.758136 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:16:19.758146 | orchestrator |
2025-09-19 11:16:19.758157 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-09-19 11:16:19.758168 | orchestrator |
2025-09-19 11:16:19.758178 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-09-19 11:16:19.758189 | orchestrator | Friday 19 September 2025 11:16:09 +0000 (0:00:00.754)
0:00:11.341 ****** 2025-09-19 11:16:19.758199 | orchestrator | changed: [testbed-manager] 2025-09-19 11:16:19.758231 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:16:19.758242 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:16:19.758252 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:16:19.758262 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:16:19.758273 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:16:19.758283 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:16:19.758293 | orchestrator | 2025-09-19 11:16:19.758305 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-19 11:16:19.758315 | orchestrator | Friday 19 September 2025 11:16:10 +0000 (0:00:01.751) 0:00:13.092 ****** 2025-09-19 11:16:19.758326 | orchestrator | changed: [testbed-manager] 2025-09-19 11:16:19.758346 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:16:19.758356 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:16:19.758367 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:16:19.758377 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:16:19.758388 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:16:19.758416 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:16:19.758427 | orchestrator | 2025-09-19 11:16:19.758438 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-19 11:16:19.758448 | orchestrator | Friday 19 September 2025 11:16:12 +0000 (0:00:01.720) 0:00:14.812 ****** 2025-09-19 11:16:19.758459 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:16:19.758470 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:16:19.758480 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:16:19.758491 | orchestrator | ok: [testbed-manager] 2025-09-19 11:16:19.758501 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:16:19.758512 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:16:19.758522 | 
orchestrator | ok: [testbed-node-2] 2025-09-19 11:16:19.758533 | orchestrator | 2025-09-19 11:16:19.758544 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-19 11:16:19.758554 | orchestrator | Friday 19 September 2025 11:16:14 +0000 (0:00:01.699) 0:00:16.512 ****** 2025-09-19 11:16:19.758572 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:16:19.758583 | orchestrator | changed: [testbed-manager] 2025-09-19 11:16:19.758593 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:16:19.758604 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:16:19.758615 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:16:19.758625 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:16:19.758636 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:16:19.758647 | orchestrator | 2025-09-19 11:16:19.758657 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-19 11:16:19.758668 | orchestrator | Friday 19 September 2025 11:16:16 +0000 (0:00:01.904) 0:00:18.417 ****** 2025-09-19 11:16:19.758679 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:16:19.758689 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:16:19.758700 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:16:19.758710 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:16:19.758721 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:16:19.758732 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:16:19.758742 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:16:19.758753 | orchestrator | 2025-09-19 11:16:19.758764 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-19 11:16:19.758774 | orchestrator | 2025-09-19 11:16:19.758785 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-19 11:16:19.758796 | orchestrator | Friday 19 September 
2025 11:16:16 +0000 (0:00:00.653) 0:00:19.071 ****** 2025-09-19 11:16:19.758806 | orchestrator | ok: [testbed-manager] 2025-09-19 11:16:19.758817 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:16:19.758828 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:16:19.758838 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:16:19.758849 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:16:19.758859 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:16:19.758870 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:16:19.758881 | orchestrator | 2025-09-19 11:16:19.758891 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:16:19.758903 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:16:19.758915 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:16:19.758926 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:16:19.758936 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:16:19.758954 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:16:19.758965 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:16:19.758975 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:16:19.758986 | orchestrator | 2025-09-19 11:16:19.758997 | orchestrator | 2025-09-19 11:16:19.759008 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:16:19.759018 | orchestrator | Friday 19 September 2025 11:16:19 +0000 (0:00:02.963) 0:00:22.034 ****** 2025-09-19 11:16:19.759029 | orchestrator | 
===============================================================================
2025-09-19 11:16:19.759040 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.75s
2025-09-19 11:16:19.759050 | orchestrator | Install python3-docker -------------------------------------------------- 2.96s
2025-09-19 11:16:19.759061 | orchestrator | Apply netplan configuration --------------------------------------------- 2.35s
2025-09-19 11:16:19.759072 | orchestrator | Apply netplan configuration --------------------------------------------- 2.01s
2025-09-19 11:16:19.759082 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.90s
2025-09-19 11:16:19.759093 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.75s
2025-09-19 11:16:19.759104 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.72s
2025-09-19 11:16:19.759114 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.70s
2025-09-19 11:16:19.759125 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.56s
2025-09-19 11:16:19.759136 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s
2025-09-19 11:16:19.759146 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.75s
2025-09-19 11:16:19.759163 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.65s
2025-09-19 11:16:20.590500 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-09-19 11:16:32.674397 | orchestrator | 2025-09-19 11:16:32 | INFO  | Task 24465ce6-7c18-4ad0-9158-c062699d78ac (reboot) was prepared for execution.
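The `-e ireallymeanit=yes` extra variable is a confirmation gate: the first task of each reboot play ("Exit playbook, if user did not mean to reboot systems") is skipped only because the variable was supplied on the command line. A minimal shell analogue of that guard (a hypothetical helper for illustration, not the actual osism playbook code):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the confirmation-gate pattern used by
# "osism apply reboot -e ireallymeanit=yes": succeed only when the
# caller explicitly confirmed the destructive action.
confirm_reboot() {
    local confirm="${1:-}"
    if [[ "$confirm" != "ireallymeanit=yes" ]]; then
        echo "refusing to reboot: pass ireallymeanit=yes to confirm" >&2
        return 1
    fi
    echo "confirmed"
}

confirm_reboot ireallymeanit=yes   # prints: confirmed
```

Forgetting the variable makes the play exit before any node is rebooted, which is why the "Exit playbook" task shows as skipped in the run above.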
2025-09-19 11:16:32.674530 | orchestrator | 2025-09-19 11:16:32 | INFO  | It takes a moment until task 24465ce6-7c18-4ad0-9158-c062699d78ac (reboot) has been started and output is visible here. 2025-09-19 11:16:42.939783 | orchestrator | 2025-09-19 11:16:42.939860 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-19 11:16:42.939868 | orchestrator | 2025-09-19 11:16:42.939885 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-19 11:16:42.939891 | orchestrator | Friday 19 September 2025 11:16:36 +0000 (0:00:00.215) 0:00:00.215 ****** 2025-09-19 11:16:42.939896 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:16:42.939902 | orchestrator | 2025-09-19 11:16:42.939907 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-19 11:16:42.939912 | orchestrator | Friday 19 September 2025 11:16:36 +0000 (0:00:00.097) 0:00:00.313 ****** 2025-09-19 11:16:42.939918 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:16:42.939922 | orchestrator | 2025-09-19 11:16:42.939927 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-19 11:16:42.939932 | orchestrator | Friday 19 September 2025 11:16:37 +0000 (0:00:00.977) 0:00:01.290 ****** 2025-09-19 11:16:42.939937 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:16:42.939942 | orchestrator | 2025-09-19 11:16:42.939947 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-19 11:16:42.939969 | orchestrator | 2025-09-19 11:16:42.939974 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-19 11:16:42.939979 | orchestrator | Friday 19 September 2025 11:16:37 +0000 (0:00:00.125) 0:00:01.416 ****** 2025-09-19 11:16:42.939983 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:16:42.939988 | 
orchestrator | 2025-09-19 11:16:42.939993 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-19 11:16:42.939998 | orchestrator | Friday 19 September 2025 11:16:38 +0000 (0:00:00.113) 0:00:01.529 ****** 2025-09-19 11:16:42.940003 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:16:42.940007 | orchestrator | 2025-09-19 11:16:42.940012 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-19 11:16:42.940017 | orchestrator | Friday 19 September 2025 11:16:38 +0000 (0:00:00.663) 0:00:02.193 ****** 2025-09-19 11:16:42.940022 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:16:42.940027 | orchestrator | 2025-09-19 11:16:42.940032 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-19 11:16:42.940036 | orchestrator | 2025-09-19 11:16:42.940041 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-19 11:16:42.940046 | orchestrator | Friday 19 September 2025 11:16:38 +0000 (0:00:00.122) 0:00:02.315 ****** 2025-09-19 11:16:42.940051 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:16:42.940055 | orchestrator | 2025-09-19 11:16:42.940060 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-19 11:16:42.940065 | orchestrator | Friday 19 September 2025 11:16:39 +0000 (0:00:00.210) 0:00:02.526 ****** 2025-09-19 11:16:42.940070 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:16:42.940074 | orchestrator | 2025-09-19 11:16:42.940082 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-19 11:16:42.940087 | orchestrator | Friday 19 September 2025 11:16:39 +0000 (0:00:00.676) 0:00:03.203 ****** 2025-09-19 11:16:42.940092 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:16:42.940097 | orchestrator | 2025-09-19 11:16:42.940102 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-19 11:16:42.940107 | orchestrator | 2025-09-19 11:16:42.940112 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-19 11:16:42.940116 | orchestrator | Friday 19 September 2025 11:16:39 +0000 (0:00:00.132) 0:00:03.335 ****** 2025-09-19 11:16:42.940121 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:16:42.940126 | orchestrator | 2025-09-19 11:16:42.940130 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-19 11:16:42.940135 | orchestrator | Friday 19 September 2025 11:16:39 +0000 (0:00:00.108) 0:00:03.443 ****** 2025-09-19 11:16:42.940140 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:16:42.940145 | orchestrator | 2025-09-19 11:16:42.940150 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-19 11:16:42.940154 | orchestrator | Friday 19 September 2025 11:16:40 +0000 (0:00:00.689) 0:00:04.132 ****** 2025-09-19 11:16:42.940159 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:16:42.940164 | orchestrator | 2025-09-19 11:16:42.940169 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-19 11:16:42.940173 | orchestrator | 2025-09-19 11:16:42.940178 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-19 11:16:42.940225 | orchestrator | Friday 19 September 2025 11:16:40 +0000 (0:00:00.133) 0:00:04.266 ****** 2025-09-19 11:16:42.940230 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:16:42.940235 | orchestrator | 2025-09-19 11:16:42.940240 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-19 11:16:42.940245 | orchestrator | Friday 19 September 2025 11:16:40 +0000 (0:00:00.112) 0:00:04.379 ****** 2025-09-19 
11:16:42.940249 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:16:42.940254 | orchestrator | 2025-09-19 11:16:42.940259 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-19 11:16:42.940264 | orchestrator | Friday 19 September 2025 11:16:41 +0000 (0:00:00.668) 0:00:05.047 ****** 2025-09-19 11:16:42.940275 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:16:42.940280 | orchestrator | 2025-09-19 11:16:42.940285 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-19 11:16:42.940290 | orchestrator | 2025-09-19 11:16:42.940295 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-19 11:16:42.940300 | orchestrator | Friday 19 September 2025 11:16:41 +0000 (0:00:00.125) 0:00:05.173 ****** 2025-09-19 11:16:42.940305 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:16:42.940309 | orchestrator | 2025-09-19 11:16:42.940314 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-19 11:16:42.940319 | orchestrator | Friday 19 September 2025 11:16:41 +0000 (0:00:00.120) 0:00:05.294 ****** 2025-09-19 11:16:42.940324 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:16:42.940328 | orchestrator | 2025-09-19 11:16:42.940333 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-19 11:16:42.940338 | orchestrator | Friday 19 September 2025 11:16:42 +0000 (0:00:00.668) 0:00:05.963 ****** 2025-09-19 11:16:42.940354 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:16:42.940360 | orchestrator | 2025-09-19 11:16:42.940365 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:16:42.940372 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:16:42.940378 | orchestrator | 
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:16:42.940384 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:16:42.940390 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:16:42.940395 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:16:42.940400 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:16:42.940406 | orchestrator |
2025-09-19 11:16:42.940411 | orchestrator |
2025-09-19 11:16:42.940417 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:16:42.940423 | orchestrator | Friday 19 September 2025 11:16:42 +0000 (0:00:00.040) 0:00:06.003 ******
2025-09-19 11:16:42.940428 | orchestrator | ===============================================================================
2025-09-19 11:16:42.940434 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.35s
2025-09-19 11:16:42.940439 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.76s
2025-09-19 11:16:42.940445 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.68s
2025-09-19 11:16:43.220459 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-09-19 11:16:55.237400 | orchestrator | 2025-09-19 11:16:55 | INFO  | Task c1ba82d8-661e-4c8c-9678-a9cf90e5b7e2 (wait-for-connection) was prepared for execution.
2025-09-19 11:16:55.237505 | orchestrator | 2025-09-19 11:16:55 | INFO  | It takes a moment until task c1ba82d8-661e-4c8c-9678-a9cf90e5b7e2 (wait-for-connection) has been started and output is visible here.
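Because the reboot play deliberately does not wait ("Reboot system - wait for the reboot to complete" is skipped on every node), reachability is verified afterwards by the separate wait-for-connection run. A rough shell sketch of that poll-until-reachable pattern (an illustration of the idea, not the Ansible module osism actually invokes):

```shell
#!/usr/bin/env bash
# Sketch of a wait-for-connection style poll (assumed helper, not the
# real module): retry an arbitrary probe command until it succeeds or
# a deadline expires.
wait_for_connection() {
    local timeout="$1"; shift               # seconds to keep trying
    local deadline=$((SECONDS + timeout))   # SECONDS is bash's run clock
    until "$@"; do
        if (( SECONDS >= deadline )); then
            return 1                        # gave up: host never came back
        fi
        sleep 1
    done
}

# Example probe (hypothetical host name): SSH to a node and run `true`.
# wait_for_connection 300 ssh -o BatchMode=yes testbed-node-0 true
```

Splitting "trigger reboot" and "wait for reachability" into two plays lets all six nodes reboot in parallel instead of serializing on each node's downtime.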
2025-09-19 11:17:11.199140 | orchestrator | 2025-09-19 11:17:11.199311 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-19 11:17:11.199330 | orchestrator | 2025-09-19 11:17:11.199343 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-19 11:17:11.199355 | orchestrator | Friday 19 September 2025 11:16:59 +0000 (0:00:00.247) 0:00:00.247 ****** 2025-09-19 11:17:11.199393 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:17:11.199407 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:17:11.199418 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:17:11.199428 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:17:11.199439 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:17:11.199450 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:17:11.199460 | orchestrator | 2025-09-19 11:17:11.199471 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:17:11.199484 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:17:11.199515 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:17:11.199527 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:17:11.199538 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:17:11.199549 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:17:11.199560 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:17:11.199570 | orchestrator | 2025-09-19 11:17:11.199581 | orchestrator | 2025-09-19 11:17:11.199593 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-19 11:17:11.199604 | orchestrator | Friday 19 September 2025 11:17:10 +0000 (0:00:11.590) 0:00:11.837 ****** 2025-09-19 11:17:11.199615 | orchestrator | =============================================================================== 2025-09-19 11:17:11.199625 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.59s 2025-09-19 11:17:11.510503 | orchestrator | + osism apply hddtemp 2025-09-19 11:17:23.529602 | orchestrator | 2025-09-19 11:17:23 | INFO  | Task 9dc5a816-b500-49f4-af9e-98ece25729f2 (hddtemp) was prepared for execution. 2025-09-19 11:17:23.529734 | orchestrator | 2025-09-19 11:17:23 | INFO  | It takes a moment until task 9dc5a816-b500-49f4-af9e-98ece25729f2 (hddtemp) has been started and output is visible here. 2025-09-19 11:17:52.657777 | orchestrator | 2025-09-19 11:17:52.657891 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-19 11:17:52.657908 | orchestrator | 2025-09-19 11:17:52.657921 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-19 11:17:52.657933 | orchestrator | Friday 19 September 2025 11:17:27 +0000 (0:00:00.268) 0:00:00.268 ****** 2025-09-19 11:17:52.657960 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:52.657972 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:17:52.657983 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:17:52.657994 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:17:52.658005 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:17:52.658070 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:17:52.658083 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:17:52.658094 | orchestrator | 2025-09-19 11:17:52.658105 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-19 11:17:52.658146 | orchestrator | Friday 19 September 2025 
11:17:28 +0000 (0:00:00.710) 0:00:00.978 ****** 2025-09-19 11:17:52.658159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:17:52.658181 | orchestrator | 2025-09-19 11:17:52.658234 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-19 11:17:52.658247 | orchestrator | Friday 19 September 2025 11:17:29 +0000 (0:00:01.225) 0:00:02.204 ****** 2025-09-19 11:17:52.658258 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:52.658294 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:17:52.658305 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:17:52.658316 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:17:52.658327 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:17:52.658337 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:17:52.658349 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:17:52.658360 | orchestrator | 2025-09-19 11:17:52.658370 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-19 11:17:52.658381 | orchestrator | Friday 19 September 2025 11:17:31 +0000 (0:00:02.144) 0:00:04.348 ****** 2025-09-19 11:17:52.658393 | orchestrator | changed: [testbed-manager] 2025-09-19 11:17:52.658404 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:52.658415 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:52.658426 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:52.658436 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:52.658447 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:52.658458 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:52.658468 | orchestrator | 2025-09-19 11:17:52.658479 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-09-19 11:17:52.658490 | orchestrator | Friday 19 September 2025 11:17:33 +0000 (0:00:01.163) 0:00:05.512 ****** 2025-09-19 11:17:52.658501 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:17:52.658512 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:17:52.658522 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:17:52.658533 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:17:52.658544 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:17:52.658554 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:17:52.658565 | orchestrator | ok: [testbed-manager] 2025-09-19 11:17:52.658576 | orchestrator | 2025-09-19 11:17:52.658587 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-19 11:17:52.658598 | orchestrator | Friday 19 September 2025 11:17:34 +0000 (0:00:01.183) 0:00:06.695 ****** 2025-09-19 11:17:52.658609 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:17:52.658619 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:17:52.658630 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:17:52.658641 | orchestrator | changed: [testbed-manager] 2025-09-19 11:17:52.658651 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:17:52.658662 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:17:52.658673 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:17:52.658684 | orchestrator | 2025-09-19 11:17:52.658695 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-19 11:17:52.658705 | orchestrator | Friday 19 September 2025 11:17:35 +0000 (0:00:00.811) 0:00:07.507 ****** 2025-09-19 11:17:52.658716 | orchestrator | changed: [testbed-manager] 2025-09-19 11:17:52.658727 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:52.658737 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:52.658748 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:52.658759 | orchestrator | changed: 
[testbed-node-2] 2025-09-19 11:17:52.658769 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:52.658780 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:52.658791 | orchestrator | 2025-09-19 11:17:52.658801 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-19 11:17:52.658812 | orchestrator | Friday 19 September 2025 11:17:48 +0000 (0:00:13.983) 0:00:21.491 ****** 2025-09-19 11:17:52.658823 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:17:52.658835 | orchestrator | 2025-09-19 11:17:52.658846 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-19 11:17:52.658857 | orchestrator | Friday 19 September 2025 11:17:50 +0000 (0:00:01.395) 0:00:22.887 ****** 2025-09-19 11:17:52.658867 | orchestrator | changed: [testbed-manager] 2025-09-19 11:17:52.658878 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:17:52.658898 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:17:52.658909 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:17:52.658920 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:17:52.658931 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:17:52.658942 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:17:52.658953 | orchestrator | 2025-09-19 11:17:52.658964 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:17:52.658975 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:17:52.659003 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:17:52.659015 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:17:52.659032 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:17:52.659044 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:17:52.659055 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:17:52.659065 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:17:52.659076 | orchestrator |
2025-09-19 11:17:52.659087 | orchestrator |
2025-09-19 11:17:52.659098 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:17:52.659109 | orchestrator | Friday 19 September 2025 11:17:52 +0000 (0:00:01.888) 0:00:24.775 ******
2025-09-19 11:17:52.659150 | orchestrator | ===============================================================================
2025-09-19 11:17:52.659162 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.98s
2025-09-19 11:17:52.659173 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.14s
2025-09-19 11:17:52.659184 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.89s
2025-09-19 11:17:52.659194 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.40s
2025-09-19 11:17:52.659205 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.23s
2025-09-19 11:17:52.659216 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.18s
2025-09-19 11:17:52.659227 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.16s
2025-09-19 11:17:52.659237 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.81s
2025-09-19 11:17:52.659248 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.71s
2025-09-19 11:17:52.942432 | orchestrator | ++ semver 9.2.0 7.1.1
2025-09-19 11:17:53.001529 | orchestrator | + [[ 1 -ge 0 ]]
2025-09-19 11:17:53.001617 | orchestrator | + sudo systemctl restart manager.service
2025-09-19 11:18:08.517429 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-19 11:18:08.517551 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-19 11:18:08.517577 | orchestrator | + local max_attempts=60
2025-09-19 11:18:08.517598 | orchestrator | + local name=ceph-ansible
2025-09-19 11:18:08.517610 | orchestrator | + local attempt_num=1
2025-09-19 11:18:08.517633 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:18:08.565092 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:18:08.565197 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:18:08.565208 | orchestrator | + sleep 5
2025-09-19 11:18:13.572367 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:18:13.607054 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:18:13.607205 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:18:13.607219 | orchestrator | + sleep 5
2025-09-19 11:18:18.610639 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:18:18.650199 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:18:18.650267 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 11:18:18.650280 | orchestrator | + sleep 5
2025-09-19 11:18:23.654544 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 11:18:23.690692 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 11:18:23.690770 | orchestrator |
+ (( attempt_num++ == max_attempts )) 2025-09-19 11:18:23.690777 | orchestrator | + sleep 5 2025-09-19 11:18:28.694594 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 11:18:28.730188 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 11:18:28.730267 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 11:18:28.730277 | orchestrator | + sleep 5 2025-09-19 11:18:33.735976 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 11:18:33.775864 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 11:18:33.775939 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 11:18:33.775946 | orchestrator | + sleep 5 2025-09-19 11:18:38.780737 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 11:18:38.813365 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 11:18:38.813450 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 11:18:38.813463 | orchestrator | + sleep 5 2025-09-19 11:18:43.819394 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 11:18:43.846318 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 11:18:43.846411 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 11:18:43.850093 | orchestrator | + sleep 5 2025-09-19 11:18:48.854381 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 11:18:48.896255 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 11:18:48.896337 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 11:18:48.896351 | orchestrator | + sleep 5 2025-09-19 11:18:53.899218 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 11:18:53.931773 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 11:18:53.931854 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-09-19 11:18:53.931868 | orchestrator | + sleep 5 2025-09-19 11:18:58.936655 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 11:18:58.971500 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 11:18:58.971584 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 11:18:58.971599 | orchestrator | + sleep 5 2025-09-19 11:19:03.975998 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 11:19:04.014342 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 11:19:04.014430 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 11:19:04.014444 | orchestrator | + sleep 5 2025-09-19 11:19:09.019945 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 11:19:09.059824 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-19 11:19:09.059918 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 11:19:09.059932 | orchestrator | + sleep 5 2025-09-19 11:19:14.064503 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 11:19:14.099249 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-19 11:19:14.099352 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-19 11:19:14.099369 | orchestrator | + local max_attempts=60 2025-09-19 11:19:14.099383 | orchestrator | + local name=kolla-ansible 2025-09-19 11:19:14.099394 | orchestrator | + local attempt_num=1 2025-09-19 11:19:14.099424 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-19 11:19:14.131218 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-19 11:19:14.131308 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-19 11:19:14.131321 | orchestrator | + local max_attempts=60 2025-09-19 11:19:14.131335 | orchestrator | + local name=osism-ansible 2025-09-19 11:19:14.131346 | 
orchestrator | + local attempt_num=1 2025-09-19 11:19:14.131634 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-19 11:19:14.168100 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-19 11:19:14.168201 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-19 11:19:14.168246 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-19 11:19:14.345034 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-19 11:19:14.503546 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-19 11:19:14.653394 | orchestrator | ARA in osism-ansible already disabled. 2025-09-19 11:19:14.830294 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-19 11:19:14.831728 | orchestrator | + osism apply gather-facts 2025-09-19 11:19:27.041520 | orchestrator | 2025-09-19 11:19:27 | INFO  | Task 1901e92c-7d90-4ab9-81c3-9ee9ba817b5b (gather-facts) was prepared for execution. 2025-09-19 11:19:27.041631 | orchestrator | 2025-09-19 11:19:27 | INFO  | It takes a moment until task 1901e92c-7d90-4ab9-81c3-9ee9ba817b5b (gather-facts) has been started and output is visible here. 
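The repeated `docker inspect`/`sleep 5` cycles in the `set -x` trace above correspond to a polling helper along these lines. This is a reconstruction from the trace, not the original script (which lives under /opt/configuration); the error message is invented, and plain `docker` stands in for the `/usr/bin/docker` seen in the log:

```shell
# Reconstructed from the set -x trace above: poll a container's health
# status every 5s until it reports "healthy", giving up after max_attempts
# polls (the trace shows the status going unhealthy -> starting -> healthy).
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

With the script running under `set -e`, a container that never turns healthy aborts the deploy at this point, which is why the loop is bounded.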
2025-09-19 11:19:40.276738 | orchestrator | 2025-09-19 11:19:40.276852 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 11:19:40.276868 | orchestrator | 2025-09-19 11:19:40.276880 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 11:19:40.276891 | orchestrator | Friday 19 September 2025 11:19:31 +0000 (0:00:00.239) 0:00:00.239 ****** 2025-09-19 11:19:40.276902 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:19:40.276915 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:19:40.276927 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:19:40.276939 | orchestrator | ok: [testbed-manager] 2025-09-19 11:19:40.276950 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:19:40.276961 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:19:40.276972 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:19:40.276982 | orchestrator | 2025-09-19 11:19:40.276994 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-19 11:19:40.277005 | orchestrator | 2025-09-19 11:19:40.277088 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-19 11:19:40.277102 | orchestrator | Friday 19 September 2025 11:19:39 +0000 (0:00:08.219) 0:00:08.458 ****** 2025-09-19 11:19:40.277113 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:19:40.277125 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:19:40.277136 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:19:40.277147 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:19:40.277158 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:19:40.277169 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:19:40.277180 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:19:40.277190 | orchestrator | 2025-09-19 11:19:40.277202 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-19 11:19:40.277213 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:19:40.277226 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:19:40.277237 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:19:40.277248 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:19:40.277259 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:19:40.277270 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:19:40.277281 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:19:40.277294 | orchestrator | 2025-09-19 11:19:40.277307 | orchestrator | 2025-09-19 11:19:40.277319 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:19:40.277332 | orchestrator | Friday 19 September 2025 11:19:39 +0000 (0:00:00.518) 0:00:08.977 ****** 2025-09-19 11:19:40.277374 | orchestrator | =============================================================================== 2025-09-19 11:19:40.277387 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.22s 2025-09-19 11:19:40.277399 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-09-19 11:19:40.600974 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-19 11:19:40.612793 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-19 11:19:40.626966 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-19 11:19:40.636897 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-19 11:19:40.653759 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-19 11:19:40.664245 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-19 11:19:40.675116 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-19 11:19:40.685965 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-19 11:19:40.698637 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-19 11:19:40.711868 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-19 11:19:40.729220 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-19 11:19:40.745373 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-19 11:19:40.762801 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-19 11:19:40.775068 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-19 11:19:40.788099 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-19 11:19:40.809146 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-19 11:19:40.829133 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-19 11:19:40.839438 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-19 11:19:40.849881 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-19 11:19:40.860261 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-19 11:19:40.870398 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-19 11:19:41.032994 | orchestrator | ok: Runtime: 0:23:24.703546 2025-09-19 11:19:41.153188 | 2025-09-19 11:19:41.153320 | TASK [Deploy services] 2025-09-19 11:19:41.686143 | orchestrator | skipping: Conditional result was False 2025-09-19 11:19:41.710230 | 2025-09-19 11:19:41.710441 | TASK [Deploy in a nutshell] 2025-09-19 11:19:42.438693 | orchestrator | + set -e 2025-09-19 11:19:42.440309 | orchestrator | 2025-09-19 11:19:42.440364 | orchestrator | # PULL IMAGES 2025-09-19 11:19:42.440380 | orchestrator | 2025-09-19 11:19:42.440402 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 11:19:42.440424 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 11:19:42.440439 | orchestrator | ++ INTERACTIVE=false 2025-09-19 11:19:42.440482 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 11:19:42.440504 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 11:19:42.440519 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 11:19:42.440530 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 11:19:42.440550 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 11:19:42.440561 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 11:19:42.440579 | orchestrator | ++ 
CEPH_VERSION=reef 2025-09-19 11:19:42.440591 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 11:19:42.440623 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 11:19:42.440636 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-19 11:19:42.440650 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-19 11:19:42.440661 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 11:19:42.440674 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 11:19:42.440685 | orchestrator | ++ export ARA=false 2025-09-19 11:19:42.440696 | orchestrator | ++ ARA=false 2025-09-19 11:19:42.440707 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 11:19:42.440718 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 11:19:42.440729 | orchestrator | ++ export TEMPEST=false 2025-09-19 11:19:42.440740 | orchestrator | ++ TEMPEST=false 2025-09-19 11:19:42.440751 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 11:19:42.440762 | orchestrator | ++ IS_ZUUL=true 2025-09-19 11:19:42.440773 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-09-19 11:19:42.440784 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-09-19 11:19:42.440795 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 11:19:42.440806 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 11:19:42.440817 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 11:19:42.440829 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 11:19:42.440839 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 11:19:42.440850 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 11:19:42.440861 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 11:19:42.440880 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 11:19:42.440892 | orchestrator | + echo 2025-09-19 11:19:42.440903 | orchestrator | + echo '# PULL IMAGES' 2025-09-19 11:19:42.440914 | orchestrator | + echo 2025-09-19 11:19:42.440933 | orchestrator | ++ semver 9.2.0 7.0.0 2025-09-19 
11:19:42.505670 | orchestrator | + [[ 1 -ge 0 ]] 2025-09-19 11:19:42.505771 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-19 11:19:44.379327 | orchestrator | 2025-09-19 11:19:44 | INFO  | Trying to run play pull-images in environment custom 2025-09-19 11:19:54.459984 | orchestrator | 2025-09-19 11:19:54 | INFO  | Task 36a1841d-a4b0-4f4f-9ad3-08f90dfa9d09 (pull-images) was prepared for execution. 2025-09-19 11:19:54.460115 | orchestrator | 2025-09-19 11:19:54 | INFO  | Task 36a1841d-a4b0-4f4f-9ad3-08f90dfa9d09 is running in background. No more output. Check ARA for logs. 2025-09-19 11:19:56.349934 | orchestrator | 2025-09-19 11:19:56 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-19 11:20:06.623639 | orchestrator | 2025-09-19 11:20:06 | INFO  | Task ed9282aa-ac4a-463f-85f2-ffb33dadc904 (wipe-partitions) was prepared for execution. 2025-09-19 11:20:06.623727 | orchestrator | 2025-09-19 11:20:06 | INFO  | It takes a moment until task ed9282aa-ac4a-463f-85f2-ffb33dadc904 (wipe-partitions) has been started and output is visible here. 
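The `++ semver 9.2.0 7.0.0` / `+ [[ 1 -ge 0 ]]` pair in the trace is a version gate: the helper prints a comparison result and the script only runs the step when MANAGER_VERSION is at least the threshold. A minimal stand-in with the same observable behaviour, assuming GNU `sort -V` is available (the real helper may handle pre-release tags and more):

```shell
# Hypothetical stand-in for the semver helper seen in the trace: prints 1,
# 0 or -1 depending on whether version $1 is greater than, equal to, or
# less than version $2 (dot-separated numeric versions only).
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
        return
    fi
    local lowest
    lowest=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    # If $2 sorts lowest, then $1 is the larger version.
    if [[ "$lowest" == "$2" ]]; then echo 1; else echo -1; fi
}

# Gate pattern from the trace (illustrative):
#   [[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]] && osism apply --no-wait -r 2 -e custom pull-images
```

With MANAGER_VERSION=9.2.0 this yields 1, so `[[ 1 -ge 0 ]]` passes and the pull-images play is dispatched.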
2025-09-19 11:20:18.596083 | orchestrator | 2025-09-19 11:20:18.596212 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-19 11:20:18.596229 | orchestrator | 2025-09-19 11:20:18.596260 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-19 11:20:18.596292 | orchestrator | Friday 19 September 2025 11:20:10 +0000 (0:00:00.123) 0:00:00.123 ****** 2025-09-19 11:20:18.596305 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:20:18.596317 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:20:18.596328 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:20:18.596339 | orchestrator | 2025-09-19 11:20:18.596351 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-19 11:20:18.596387 | orchestrator | Friday 19 September 2025 11:20:10 +0000 (0:00:00.551) 0:00:00.674 ****** 2025-09-19 11:20:18.596399 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:20:18.596410 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:20:18.596420 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:20:18.596435 | orchestrator | 2025-09-19 11:20:18.596446 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-19 11:20:18.596458 | orchestrator | Friday 19 September 2025 11:20:11 +0000 (0:00:00.249) 0:00:00.924 ****** 2025-09-19 11:20:18.596469 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:20:18.596480 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:20:18.596491 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:20:18.596501 | orchestrator | 2025-09-19 11:20:18.596512 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-19 11:20:18.596523 | orchestrator | Friday 19 September 2025 11:20:11 +0000 (0:00:00.667) 0:00:01.592 ****** 2025-09-19 11:20:18.596534 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 11:20:18.596545 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:20:18.596557 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:20:18.596570 | orchestrator | 2025-09-19 11:20:18.596582 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-19 11:20:18.596594 | orchestrator | Friday 19 September 2025 11:20:12 +0000 (0:00:00.239) 0:00:01.832 ****** 2025-09-19 11:20:18.596606 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-19 11:20:18.596622 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-19 11:20:18.596635 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-19 11:20:18.596648 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-19 11:20:18.596660 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-19 11:20:18.596673 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-19 11:20:18.596684 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-19 11:20:18.596696 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-19 11:20:18.596708 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-19 11:20:18.596721 | orchestrator | 2025-09-19 11:20:18.596732 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-19 11:20:18.596744 | orchestrator | Friday 19 September 2025 11:20:13 +0000 (0:00:01.193) 0:00:03.025 ****** 2025-09-19 11:20:18.596755 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-19 11:20:18.596766 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-19 11:20:18.596776 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-19 11:20:18.596787 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-19 11:20:18.596798 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-19 11:20:18.596808 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-09-19 11:20:18.596819 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-19 11:20:18.596829 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-19 11:20:18.596845 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-19 11:20:18.596863 | orchestrator | 2025-09-19 11:20:18.596882 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-19 11:20:18.596898 | orchestrator | Friday 19 September 2025 11:20:14 +0000 (0:00:01.372) 0:00:04.398 ****** 2025-09-19 11:20:18.596915 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-19 11:20:18.596930 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-19 11:20:18.596946 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-19 11:20:18.596965 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-19 11:20:18.597009 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-19 11:20:18.597028 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-19 11:20:18.597045 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-19 11:20:18.597063 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-19 11:20:18.597104 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-19 11:20:18.597121 | orchestrator | 2025-09-19 11:20:18.597138 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-19 11:20:18.597156 | orchestrator | Friday 19 September 2025 11:20:16 +0000 (0:00:02.333) 0:00:06.731 ****** 2025-09-19 11:20:18.597175 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:20:18.597192 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:20:18.597210 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:20:18.597228 | orchestrator | 2025-09-19 11:20:18.597243 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-09-19 11:20:18.597254 | orchestrator | Friday 19 September 2025 11:20:17 +0000 (0:00:00.626) 0:00:07.357 ****** 2025-09-19 11:20:18.597264 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:20:18.597275 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:20:18.597285 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:20:18.597296 | orchestrator | 2025-09-19 11:20:18.597306 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:20:18.597318 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:20:18.597332 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:20:18.597363 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:20:18.597374 | orchestrator | 2025-09-19 11:20:18.597385 | orchestrator | 2025-09-19 11:20:18.597396 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:20:18.597407 | orchestrator | Friday 19 September 2025 11:20:18 +0000 (0:00:00.653) 0:00:08.011 ****** 2025-09-19 11:20:18.597417 | orchestrator | =============================================================================== 2025-09-19 11:20:18.597428 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.33s 2025-09-19 11:20:18.597439 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.37s 2025-09-19 11:20:18.597450 | orchestrator | Check device availability ----------------------------------------------- 1.19s 2025-09-19 11:20:18.597460 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.67s 2025-09-19 11:20:18.597471 | orchestrator | Request device events from the kernel 
----------------------------------- 0.65s 2025-09-19 11:20:18.597482 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s 2025-09-19 11:20:18.597492 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.55s 2025-09-19 11:20:18.597503 | orchestrator | Remove all rook related logical devices --------------------------------- 0.25s 2025-09-19 11:20:18.597514 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2025-09-19 11:20:30.698457 | orchestrator | 2025-09-19 11:20:30 | INFO  | Task 3c1a1bbb-ebdc-4756-b1d1-395f849ade4a (facts) was prepared for execution. 2025-09-19 11:20:30.698571 | orchestrator | 2025-09-19 11:20:30 | INFO  | It takes a moment until task 3c1a1bbb-ebdc-4756-b1d1-395f849ade4a (facts) has been started and output is visible here. 2025-09-19 11:20:42.397493 | orchestrator | 2025-09-19 11:20:42.397609 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-19 11:20:42.397624 | orchestrator | 2025-09-19 11:20:42.397637 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-19 11:20:42.397648 | orchestrator | Friday 19 September 2025 11:20:34 +0000 (0:00:00.274) 0:00:00.274 ****** 2025-09-19 11:20:42.397659 | orchestrator | ok: [testbed-manager] 2025-09-19 11:20:42.397671 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:20:42.397682 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:20:42.397693 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:20:42.397728 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:20:42.397739 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:20:42.397750 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:20:42.397760 | orchestrator | 2025-09-19 11:20:42.397771 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-19 11:20:42.397782 | 
orchestrator | Friday 19 September 2025 11:20:35 +0000 (0:00:01.065) 0:00:01.339 ****** 2025-09-19 11:20:42.397793 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:20:42.397804 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:20:42.397815 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:20:42.397825 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:20:42.397836 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:20:42.397846 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:20:42.397857 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:20:42.397868 | orchestrator | 2025-09-19 11:20:42.397879 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 11:20:42.397889 | orchestrator | 2025-09-19 11:20:42.397915 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 11:20:42.397926 | orchestrator | Friday 19 September 2025 11:20:37 +0000 (0:00:01.243) 0:00:02.583 ****** 2025-09-19 11:20:42.397937 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:20:42.397948 | orchestrator | ok: [testbed-manager] 2025-09-19 11:20:42.398007 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:20:42.398084 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:20:42.398100 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:20:42.398113 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:20:42.398124 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:20:42.398134 | orchestrator | 2025-09-19 11:20:42.398145 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-19 11:20:42.398157 | orchestrator | 2025-09-19 11:20:42.398167 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-19 11:20:42.398178 | orchestrator | Friday 19 September 2025 11:20:41 +0000 (0:00:04.484) 0:00:07.068 ****** 2025-09-19 11:20:42.398189 | orchestrator | 
skipping: [testbed-manager] 2025-09-19 11:20:42.398200 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:20:42.398211 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:20:42.398221 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:20:42.398232 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:20:42.398243 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:20:42.398253 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:20:42.398264 | orchestrator | 2025-09-19 11:20:42.398275 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:20:42.398286 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:20:42.398298 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:20:42.398309 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:20:42.398320 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:20:42.398331 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:20:42.398342 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:20:42.398353 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:20:42.398364 | orchestrator | 2025-09-19 11:20:42.398375 | orchestrator | 2025-09-19 11:20:42.398386 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:20:42.398408 | orchestrator | Friday 19 September 2025 11:20:42 +0000 (0:00:00.510) 0:00:07.578 ****** 2025-09-19 11:20:42.398419 | orchestrator | =============================================================================== 
2025-09-19 11:20:42.398430 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.48s
2025-09-19 11:20:42.398441 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s
2025-09-19 11:20:42.398452 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.07s
2025-09-19 11:20:42.398463 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2025-09-19 11:20:44.542942 | orchestrator | 2025-09-19 11:20:44 | INFO  | Task d46a36af-7784-4bf3-a5f8-2826a2eb7ff3 (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-19 11:20:44.543100 | orchestrator | 2025-09-19 11:20:44 | INFO  | It takes a moment until task d46a36af-7784-4bf3-a5f8-2826a2eb7ff3 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-19 11:20:56.352318 | orchestrator |
2025-09-19 11:20:56.352450 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-19 11:20:56.352468 | orchestrator |
2025-09-19 11:20:56.352481 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 11:20:56.352493 | orchestrator | Friday 19 September 2025 11:20:48 +0000 (0:00:00.363) 0:00:00.363 ******
2025-09-19 11:20:56.352505 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 11:20:56.352516 | orchestrator |
2025-09-19 11:20:56.352527 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 11:20:56.352537 | orchestrator | Friday 19 September 2025 11:20:49 +0000 (0:00:00.251) 0:00:00.615 ******
2025-09-19 11:20:56.352548 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:20:56.352559 | orchestrator |
2025-09-19 11:20:56.352570 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.352581 | orchestrator | Friday 19 September 2025 11:20:49 +0000 (0:00:00.214) 0:00:00.829 ******
2025-09-19 11:20:56.352592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-19 11:20:56.352603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-19 11:20:56.352625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-19 11:20:56.352637 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-19 11:20:56.352648 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-19 11:20:56.352658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-19 11:20:56.352669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-19 11:20:56.352680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-19 11:20:56.352690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-19 11:20:56.352701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-19 11:20:56.352712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-19 11:20:56.352722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-19 11:20:56.352748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-19 11:20:56.352760 | orchestrator |
2025-09-19 11:20:56.352771 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.352782 | orchestrator | Friday 19 September 2025 11:20:49 +0000 (0:00:00.346) 0:00:01.176 ******
2025-09-19 11:20:56.352792 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.352803 | orchestrator |
2025-09-19 11:20:56.352836 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.352850 | orchestrator | Friday 19 September 2025 11:20:50 +0000 (0:00:00.465) 0:00:01.641 ******
2025-09-19 11:20:56.352862 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.352874 | orchestrator |
2025-09-19 11:20:56.352886 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.352898 | orchestrator | Friday 19 September 2025 11:20:50 +0000 (0:00:00.233) 0:00:01.875 ******
2025-09-19 11:20:56.352910 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.352922 | orchestrator |
2025-09-19 11:20:56.352935 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.352985 | orchestrator | Friday 19 September 2025 11:20:50 +0000 (0:00:00.195) 0:00:02.070 ******
2025-09-19 11:20:56.352998 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.353010 | orchestrator |
2025-09-19 11:20:56.353026 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.353038 | orchestrator | Friday 19 September 2025 11:20:50 +0000 (0:00:00.192) 0:00:02.263 ******
2025-09-19 11:20:56.353051 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.353062 | orchestrator |
2025-09-19 11:20:56.353075 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.353088 | orchestrator | Friday 19 September 2025 11:20:50 +0000 (0:00:00.188) 0:00:02.452 ******
2025-09-19 11:20:56.353100 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.353112 | orchestrator |
2025-09-19 11:20:56.353125 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.353137 | orchestrator | Friday 19 September 2025 11:20:51 +0000 (0:00:00.222) 0:00:02.674 ******
2025-09-19 11:20:56.353149 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.353161 | orchestrator |
2025-09-19 11:20:56.353173 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.353185 | orchestrator | Friday 19 September 2025 11:20:51 +0000 (0:00:00.195) 0:00:02.869 ******
2025-09-19 11:20:56.353197 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.353210 | orchestrator |
2025-09-19 11:20:56.353222 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.353232 | orchestrator | Friday 19 September 2025 11:20:51 +0000 (0:00:00.200) 0:00:03.070 ******
2025-09-19 11:20:56.353243 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10)
2025-09-19 11:20:56.353255 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10)
2025-09-19 11:20:56.353266 | orchestrator |
2025-09-19 11:20:56.353276 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.353287 | orchestrator | Friday 19 September 2025 11:20:51 +0000 (0:00:00.396) 0:00:03.467 ******
2025-09-19 11:20:56.353313 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_729b54dd-f4c1-4a98-9e39-7aa2dbdf058c)
2025-09-19 11:20:56.353324 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_729b54dd-f4c1-4a98-9e39-7aa2dbdf058c)
2025-09-19 11:20:56.353335 | orchestrator |
2025-09-19 11:20:56.353346 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.353363 | orchestrator | Friday 19 September 2025 11:20:52 +0000 (0:00:00.425) 0:00:03.892 ******
2025-09-19 11:20:56.353374 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ff354216-c1d2-4110-b9e3-f4cf06b21a62)
2025-09-19 11:20:56.353385 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ff354216-c1d2-4110-b9e3-f4cf06b21a62)
2025-09-19 11:20:56.353395 | orchestrator |
2025-09-19 11:20:56.353406 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.353416 | orchestrator | Friday 19 September 2025 11:20:52 +0000 (0:00:00.605) 0:00:04.498 ******
2025-09-19 11:20:56.353427 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2859ea6e-5cf3-4595-8353-f67711d21d4e)
2025-09-19 11:20:56.353446 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2859ea6e-5cf3-4595-8353-f67711d21d4e)
2025-09-19 11:20:56.353457 | orchestrator |
2025-09-19 11:20:56.353468 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:20:56.353479 | orchestrator | Friday 19 September 2025 11:20:53 +0000 (0:00:00.627) 0:00:05.126 ******
2025-09-19 11:20:56.353489 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-19 11:20:56.353500 | orchestrator |
2025-09-19 11:20:56.353510 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:20:56.353521 | orchestrator | Friday 19 September 2025 11:20:54 +0000 (0:00:00.779) 0:00:05.906 ******
2025-09-19 11:20:56.353531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-19 11:20:56.353542 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-19 11:20:56.353552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-19 11:20:56.353563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-19 11:20:56.353573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-19 11:20:56.353584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-19 11:20:56.353594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-19 11:20:56.353605 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-19 11:20:56.353615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-19 11:20:56.353625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-19 11:20:56.353636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-19 11:20:56.353647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-19 11:20:56.353657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-19 11:20:56.353668 | orchestrator |
2025-09-19 11:20:56.353679 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:20:56.353690 | orchestrator | Friday 19 September 2025 11:20:54 +0000 (0:00:00.377) 0:00:06.283 ******
2025-09-19 11:20:56.353700 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.353711 | orchestrator |
2025-09-19 11:20:56.353721 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:20:56.353732 | orchestrator | Friday 19 September 2025 11:20:54 +0000 (0:00:00.195) 0:00:06.478 ******
2025-09-19 11:20:56.353742 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.353753 | orchestrator |
2025-09-19 11:20:56.353764 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:20:56.353774 | orchestrator | Friday 19 September 2025 11:20:55 +0000 (0:00:00.204) 0:00:06.683 ******
2025-09-19 11:20:56.353785 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.353796 | orchestrator |
2025-09-19 11:20:56.353806 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:20:56.353817 | orchestrator | Friday 19 September 2025 11:20:55 +0000 (0:00:00.204) 0:00:06.887 ******
2025-09-19 11:20:56.353827 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.353838 | orchestrator |
2025-09-19 11:20:56.353848 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:20:56.353859 | orchestrator | Friday 19 September 2025 11:20:55 +0000 (0:00:00.211) 0:00:07.099 ******
2025-09-19 11:20:56.353870 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.353880 | orchestrator |
2025-09-19 11:20:56.353891 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:20:56.353908 | orchestrator | Friday 19 September 2025 11:20:55 +0000 (0:00:00.206) 0:00:07.305 ******
2025-09-19 11:20:56.353918 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.353929 | orchestrator |
2025-09-19 11:20:56.353939 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:20:56.353969 | orchestrator | Friday 19 September 2025 11:20:55 +0000 (0:00:00.198) 0:00:07.504 ******
2025-09-19 11:20:56.353980 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:20:56.353991 | orchestrator |
2025-09-19 11:20:56.354001 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:20:56.354082 | orchestrator | Friday 19 September 2025 11:20:56 +0000 (0:00:00.188) 0:00:07.692 ******
2025-09-19 11:20:56.354107 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.934810 | orchestrator |
2025-09-19 11:21:03.934917 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:03.934934 | orchestrator | Friday 19 September 2025 11:20:56 +0000 (0:00:00.247) 0:00:07.940 ******
2025-09-19 11:21:03.934992 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-19 11:21:03.935005 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-19 11:21:03.935016 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-19 11:21:03.935027 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-19 11:21:03.935038 | orchestrator |
2025-09-19 11:21:03.935050 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:03.935081 | orchestrator | Friday 19 September 2025 11:20:57 +0000 (0:00:01.010) 0:00:08.950 ******
2025-09-19 11:21:03.935093 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.935103 | orchestrator |
2025-09-19 11:21:03.935114 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:03.935125 | orchestrator | Friday 19 September 2025 11:20:57 +0000 (0:00:00.258) 0:00:09.208 ******
2025-09-19 11:21:03.935136 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.935147 | orchestrator |
2025-09-19 11:21:03.935158 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:03.935169 | orchestrator | Friday 19 September 2025 11:20:57 +0000 (0:00:00.208) 0:00:09.417 ******
2025-09-19 11:21:03.935179 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.935190 | orchestrator |
2025-09-19 11:21:03.935201 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:03.935211 | orchestrator | Friday 19 September 2025 11:20:58 +0000 (0:00:00.252)
0:00:09.669 ******
2025-09-19 11:21:03.935222 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.935233 | orchestrator |
2025-09-19 11:21:03.935243 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-19 11:21:03.935254 | orchestrator | Friday 19 September 2025 11:20:58 +0000 (0:00:00.197) 0:00:09.866 ******
2025-09-19 11:21:03.935265 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-09-19 11:21:03.935275 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-09-19 11:21:03.935286 | orchestrator |
2025-09-19 11:21:03.935297 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-19 11:21:03.935308 | orchestrator | Friday 19 September 2025 11:20:58 +0000 (0:00:00.171) 0:00:10.038 ******
2025-09-19 11:21:03.935318 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.935329 | orchestrator |
2025-09-19 11:21:03.935340 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-19 11:21:03.935353 | orchestrator | Friday 19 September 2025 11:20:58 +0000 (0:00:00.145) 0:00:10.183 ******
2025-09-19 11:21:03.935365 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.935377 | orchestrator |
2025-09-19 11:21:03.935390 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-19 11:21:03.935402 | orchestrator | Friday 19 September 2025 11:20:58 +0000 (0:00:00.127) 0:00:10.311 ******
2025-09-19 11:21:03.935415 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.935427 | orchestrator |
2025-09-19 11:21:03.935464 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-19 11:21:03.935477 | orchestrator | Friday 19 September 2025 11:20:58 +0000 (0:00:00.135) 0:00:10.446 ******
2025-09-19 11:21:03.935489 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:21:03.935502 | orchestrator |
2025-09-19 11:21:03.935516 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-19 11:21:03.935527 | orchestrator | Friday 19 September 2025 11:20:58 +0000 (0:00:00.143) 0:00:10.590 ******
2025-09-19 11:21:03.935539 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f2e5a9ae-16db-5885-a5f1-5293896cd0a9'}})
2025-09-19 11:21:03.935550 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd15bf0b7-095a-52ef-97a5-c7d3cf055ef5'}})
2025-09-19 11:21:03.935561 | orchestrator |
2025-09-19 11:21:03.935572 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-19 11:21:03.935582 | orchestrator | Friday 19 September 2025 11:20:59 +0000 (0:00:00.171) 0:00:10.761 ******
2025-09-19 11:21:03.935594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f2e5a9ae-16db-5885-a5f1-5293896cd0a9'}})
2025-09-19 11:21:03.935613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd15bf0b7-095a-52ef-97a5-c7d3cf055ef5'}})
2025-09-19 11:21:03.935625 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.935635 | orchestrator |
2025-09-19 11:21:03.935646 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-19 11:21:03.935657 | orchestrator | Friday 19 September 2025 11:20:59 +0000 (0:00:00.132) 0:00:10.894 ******
2025-09-19 11:21:03.935668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f2e5a9ae-16db-5885-a5f1-5293896cd0a9'}})
2025-09-19 11:21:03.935679 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd15bf0b7-095a-52ef-97a5-c7d3cf055ef5'}})
2025-09-19 11:21:03.935690 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.935700 | orchestrator |
2025-09-19 11:21:03.935711 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-19 11:21:03.935722 | orchestrator | Friday 19 September 2025 11:20:59 +0000 (0:00:00.141) 0:00:11.036 ******
2025-09-19 11:21:03.935732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f2e5a9ae-16db-5885-a5f1-5293896cd0a9'}})
2025-09-19 11:21:03.935743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd15bf0b7-095a-52ef-97a5-c7d3cf055ef5'}})
2025-09-19 11:21:03.935754 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.935765 | orchestrator |
2025-09-19 11:21:03.935792 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-19 11:21:03.935803 | orchestrator | Friday 19 September 2025 11:20:59 +0000 (0:00:00.392) 0:00:11.428 ******
2025-09-19 11:21:03.935814 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:21:03.935825 | orchestrator |
2025-09-19 11:21:03.935836 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-19 11:21:03.935846 | orchestrator | Friday 19 September 2025 11:20:59 +0000 (0:00:00.137) 0:00:11.565 ******
2025-09-19 11:21:03.935857 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:21:03.935868 | orchestrator |
2025-09-19 11:21:03.935878 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-19 11:21:03.935889 | orchestrator | Friday 19 September 2025 11:21:00 +0000 (0:00:00.137) 0:00:11.703 ******
2025-09-19 11:21:03.935900 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.935911 | orchestrator |
2025-09-19 11:21:03.935921 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-19 11:21:03.935932 | orchestrator | Friday 19 September 2025 11:21:00 +0000 (0:00:00.138) 0:00:11.841 ******
2025-09-19 11:21:03.935978 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.935989 | orchestrator |
2025-09-19 11:21:03.936000 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-19 11:21:03.936022 | orchestrator | Friday 19 September 2025 11:21:00 +0000 (0:00:00.137) 0:00:11.978 ******
2025-09-19 11:21:03.936033 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.936044 | orchestrator |
2025-09-19 11:21:03.936055 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-19 11:21:03.936065 | orchestrator | Friday 19 September 2025 11:21:00 +0000 (0:00:00.137) 0:00:12.116 ******
2025-09-19 11:21:03.936076 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 11:21:03.936087 | orchestrator |     "ceph_osd_devices": {
2025-09-19 11:21:03.936097 | orchestrator |         "sdb": {
2025-09-19 11:21:03.936109 | orchestrator |             "osd_lvm_uuid": "f2e5a9ae-16db-5885-a5f1-5293896cd0a9"
2025-09-19 11:21:03.936120 | orchestrator |         },
2025-09-19 11:21:03.936131 | orchestrator |         "sdc": {
2025-09-19 11:21:03.936142 | orchestrator |             "osd_lvm_uuid": "d15bf0b7-095a-52ef-97a5-c7d3cf055ef5"
2025-09-19 11:21:03.936153 | orchestrator |         }
2025-09-19 11:21:03.936164 | orchestrator |     }
2025-09-19 11:21:03.936175 | orchestrator | }
2025-09-19 11:21:03.936186 | orchestrator |
2025-09-19 11:21:03.936197 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-19 11:21:03.936208 | orchestrator | Friday 19 September 2025 11:21:00 +0000 (0:00:00.147) 0:00:12.263 ******
2025-09-19 11:21:03.936218 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.936229 | orchestrator |
2025-09-19 11:21:03.936240 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-19 11:21:03.936250 | orchestrator | Friday 19 September 2025 11:21:00 +0000 (0:00:00.138) 0:00:12.402 ******
2025-09-19 11:21:03.936267 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.936278 | orchestrator |
2025-09-19 11:21:03.936289 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-19 11:21:03.936300 | orchestrator | Friday 19 September 2025 11:21:00 +0000 (0:00:00.134) 0:00:12.537 ******
2025-09-19 11:21:03.936310 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:21:03.936321 | orchestrator |
2025-09-19 11:21:03.936332 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-19 11:21:03.936342 | orchestrator | Friday 19 September 2025 11:21:01 +0000 (0:00:00.135) 0:00:12.673 ******
2025-09-19 11:21:03.936353 | orchestrator | changed: [testbed-node-3] => {
2025-09-19 11:21:03.936363 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-19 11:21:03.936374 | orchestrator |         "ceph_osd_devices": {
2025-09-19 11:21:03.936385 | orchestrator |             "sdb": {
2025-09-19 11:21:03.936395 | orchestrator |                 "osd_lvm_uuid": "f2e5a9ae-16db-5885-a5f1-5293896cd0a9"
2025-09-19 11:21:03.936406 | orchestrator |             },
2025-09-19 11:21:03.936417 | orchestrator |             "sdc": {
2025-09-19 11:21:03.936428 | orchestrator |                 "osd_lvm_uuid": "d15bf0b7-095a-52ef-97a5-c7d3cf055ef5"
2025-09-19 11:21:03.936439 | orchestrator |             }
2025-09-19 11:21:03.936449 | orchestrator |         },
2025-09-19 11:21:03.936460 | orchestrator |         "lvm_volumes": [
2025-09-19 11:21:03.936471 | orchestrator |             {
2025-09-19 11:21:03.936482 | orchestrator |                 "data": "osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9",
2025-09-19 11:21:03.936492 | orchestrator |                 "data_vg": "ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9"
2025-09-19 11:21:03.936503 | orchestrator |             },
2025-09-19 11:21:03.936514 | orchestrator |             {
2025-09-19 11:21:03.936525 | orchestrator |                 "data": "osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5",
2025-09-19 11:21:03.936535 | orchestrator |                 "data_vg": "ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5"
2025-09-19 11:21:03.936546 | orchestrator |             }
2025-09-19 11:21:03.936557 | orchestrator |         ]
2025-09-19 11:21:03.936568 | orchestrator |     }
2025-09-19 11:21:03.936579 | orchestrator | }
2025-09-19 11:21:03.936589 | orchestrator |
2025-09-19 11:21:03.936600 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-19 11:21:03.936611 | orchestrator | Friday 19 September 2025 11:21:01 +0000 (0:00:00.202) 0:00:12.876 ******
2025-09-19 11:21:03.936629 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 11:21:03.936640 | orchestrator |
2025-09-19 11:21:03.936651 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-19 11:21:03.936661 | orchestrator |
2025-09-19 11:21:03.936672 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 11:21:03.936683 | orchestrator | Friday 19 September 2025 11:21:03 +0000 (0:00:02.167) 0:00:15.043 ******
2025-09-19 11:21:03.936693 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-19 11:21:03.936704 | orchestrator |
2025-09-19 11:21:03.936715 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 11:21:03.936725 | orchestrator | Friday 19 September 2025 11:21:03 +0000 (0:00:00.253) 0:00:15.296 ******
2025-09-19 11:21:03.936736 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:21:03.936747 | orchestrator |
2025-09-19 11:21:03.936758 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:03.936775 | orchestrator | Friday 19 September 2025 11:21:03 +0000 (0:00:00.227) 0:00:15.524 ******
2025-09-19 11:21:11.924048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-19 11:21:11.924170 | orchestrator | included:
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-19 11:21:11.924185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-19 11:21:11.924197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-19 11:21:11.924208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-19 11:21:11.924219 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-19 11:21:11.924230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-19 11:21:11.924241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-19 11:21:11.924251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-19 11:21:11.924263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-19 11:21:11.924295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-19 11:21:11.924306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-19 11:21:11.924317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-19 11:21:11.924328 | orchestrator |
2025-09-19 11:21:11.924346 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:11.924364 | orchestrator | Friday 19 September 2025 11:21:04 +0000 (0:00:00.362) 0:00:15.887 ******
2025-09-19 11:21:11.924384 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:21:11.924404 | orchestrator |
2025-09-19 11:21:11.924424 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:11.924442 | orchestrator | Friday 19 September 2025 11:21:04 +0000 (0:00:00.213) 0:00:16.101 ******
2025-09-19 11:21:11.924461 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:21:11.924479 | orchestrator |
2025-09-19 11:21:11.924499 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:11.924519 | orchestrator | Friday 19 September 2025 11:21:04 +0000 (0:00:00.188) 0:00:16.289 ******
2025-09-19 11:21:11.924538 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:21:11.924555 | orchestrator |
2025-09-19 11:21:11.924568 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:11.924580 | orchestrator | Friday 19 September 2025 11:21:04 +0000 (0:00:00.200) 0:00:16.490 ******
2025-09-19 11:21:11.924592 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:21:11.924605 | orchestrator |
2025-09-19 11:21:11.924641 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:11.924654 | orchestrator | Friday 19 September 2025 11:21:05 +0000 (0:00:00.221) 0:00:16.712 ******
2025-09-19 11:21:11.924666 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:21:11.924679 | orchestrator |
2025-09-19 11:21:11.924691 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:11.924703 | orchestrator | Friday 19 September 2025 11:21:05 +0000 (0:00:00.213) 0:00:16.926 ******
2025-09-19 11:21:11.924714 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:21:11.924726 | orchestrator |
2025-09-19 11:21:11.924739 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:11.924751 | orchestrator | Friday 19 September 2025 11:21:05 +0000 (0:00:00.569) 0:00:17.496 ******
2025-09-19 11:21:11.924763 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:21:11.924775 | orchestrator |
2025-09-19 11:21:11.924788 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:11.924800 | orchestrator | Friday 19 September 2025 11:21:06 +0000 (0:00:00.224) 0:00:17.720 ******
2025-09-19 11:21:11.924812 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:21:11.924824 | orchestrator |
2025-09-19 11:21:11.924837 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:11.924850 | orchestrator | Friday 19 September 2025 11:21:06 +0000 (0:00:00.196) 0:00:17.917 ******
2025-09-19 11:21:11.924862 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2)
2025-09-19 11:21:11.924875 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2)
2025-09-19 11:21:11.924888 | orchestrator |
2025-09-19 11:21:11.924899 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:11.924910 | orchestrator | Friday 19 September 2025 11:21:06 +0000 (0:00:00.407) 0:00:18.324 ******
2025-09-19 11:21:11.924921 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a7da52da-8ff9-443f-9c01-2997209c642a)
2025-09-19 11:21:11.924956 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a7da52da-8ff9-443f-9c01-2997209c642a)
2025-09-19 11:21:11.924969 | orchestrator |
2025-09-19 11:21:11.924980 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:11.924991 | orchestrator | Friday 19 September 2025 11:21:07 +0000 (0:00:00.422) 0:00:18.746 ******
2025-09-19 11:21:11.925002 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2d05b72c-4493-4412-ad25-c0b6cbf3de12)
2025-09-19 11:21:11.925012 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2d05b72c-4493-4412-ad25-c0b6cbf3de12)
2025-09-19 11:21:11.925023 | orchestrator |
2025-09-19 11:21:11.925034 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:11.925045 | orchestrator | Friday 19 September 2025 11:21:07 +0000 (0:00:00.397) 0:00:19.144 ******
2025-09-19 11:21:11.925074 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c)
2025-09-19 11:21:11.925085 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c)
2025-09-19 11:21:11.925096 | orchestrator |
2025-09-19 11:21:11.925107 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:21:11.925117 | orchestrator | Friday 19 September 2025 11:21:07 +0000 (0:00:00.437) 0:00:19.581 ******
2025-09-19 11:21:11.925128 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-19 11:21:11.925138 | orchestrator |
2025-09-19 11:21:11.925149 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:11.925168 | orchestrator | Friday 19 September 2025 11:21:08 +0000 (0:00:00.324) 0:00:19.905 ******
2025-09-19 11:21:11.925179 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-19 11:21:11.925189 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-19 11:21:11.925208 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-19 11:21:11.925219 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-19 11:21:11.925229 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-19 11:21:11.925239 | orchestrator |
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-19 11:21:11.925250 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-19 11:21:11.925260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-19 11:21:11.925271 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-19 11:21:11.925281 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-19 11:21:11.925292 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-19 11:21:11.925302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-19 11:21:11.925313 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-19 11:21:11.925323 | orchestrator | 2025-09-19 11:21:11.925334 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:11.925344 | orchestrator | Friday 19 September 2025 11:21:08 +0000 (0:00:00.391) 0:00:20.297 ****** 2025-09-19 11:21:11.925355 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:11.925365 | orchestrator | 2025-09-19 11:21:11.925376 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:11.925386 | orchestrator | Friday 19 September 2025 11:21:08 +0000 (0:00:00.188) 0:00:20.486 ****** 2025-09-19 11:21:11.925396 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:11.925407 | orchestrator | 2025-09-19 11:21:11.925417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:11.925428 | orchestrator | Friday 19 September 2025 11:21:09 +0000 (0:00:00.692) 0:00:21.179 ****** 
2025-09-19 11:21:11.925438 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:11.925449 | orchestrator | 2025-09-19 11:21:11.925459 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:11.925470 | orchestrator | Friday 19 September 2025 11:21:09 +0000 (0:00:00.241) 0:00:21.421 ****** 2025-09-19 11:21:11.925481 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:11.925491 | orchestrator | 2025-09-19 11:21:11.925502 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:11.925520 | orchestrator | Friday 19 September 2025 11:21:10 +0000 (0:00:00.218) 0:00:21.639 ****** 2025-09-19 11:21:11.925538 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:11.925556 | orchestrator | 2025-09-19 11:21:11.925575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:11.925593 | orchestrator | Friday 19 September 2025 11:21:10 +0000 (0:00:00.277) 0:00:21.917 ****** 2025-09-19 11:21:11.925613 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:11.925631 | orchestrator | 2025-09-19 11:21:11.925649 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:11.925660 | orchestrator | Friday 19 September 2025 11:21:10 +0000 (0:00:00.213) 0:00:22.131 ****** 2025-09-19 11:21:11.925670 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:11.925681 | orchestrator | 2025-09-19 11:21:11.925691 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:11.925702 | orchestrator | Friday 19 September 2025 11:21:10 +0000 (0:00:00.198) 0:00:22.330 ****** 2025-09-19 11:21:11.925712 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:11.925723 | orchestrator | 2025-09-19 11:21:11.925734 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-19 11:21:11.925754 | orchestrator | Friday 19 September 2025 11:21:11 +0000 (0:00:00.312) 0:00:22.642 ****** 2025-09-19 11:21:11.925764 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-19 11:21:11.925775 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-19 11:21:11.925786 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-19 11:21:11.925797 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-19 11:21:11.925807 | orchestrator | 2025-09-19 11:21:11.925818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:11.925829 | orchestrator | Friday 19 September 2025 11:21:11 +0000 (0:00:00.670) 0:00:23.312 ****** 2025-09-19 11:21:11.925839 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:11.925850 | orchestrator | 2025-09-19 11:21:11.925868 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:18.003201 | orchestrator | Friday 19 September 2025 11:21:11 +0000 (0:00:00.199) 0:00:23.512 ****** 2025-09-19 11:21:18.003289 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.003304 | orchestrator | 2025-09-19 11:21:18.003317 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:18.003329 | orchestrator | Friday 19 September 2025 11:21:12 +0000 (0:00:00.173) 0:00:23.686 ****** 2025-09-19 11:21:18.003340 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.003351 | orchestrator | 2025-09-19 11:21:18.003361 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:18.003372 | orchestrator | Friday 19 September 2025 11:21:12 +0000 (0:00:00.186) 0:00:23.872 ****** 2025-09-19 11:21:18.003383 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.003393 | orchestrator | 2025-09-19 11:21:18.003420 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] *********************************************** 2025-09-19 11:21:18.003431 | orchestrator | Friday 19 September 2025 11:21:12 +0000 (0:00:00.185) 0:00:24.058 ****** 2025-09-19 11:21:18.003442 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-19 11:21:18.003452 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-19 11:21:18.003463 | orchestrator | 2025-09-19 11:21:18.003474 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-19 11:21:18.003485 | orchestrator | Friday 19 September 2025 11:21:12 +0000 (0:00:00.366) 0:00:24.424 ****** 2025-09-19 11:21:18.003495 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.003506 | orchestrator | 2025-09-19 11:21:18.003517 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-19 11:21:18.003527 | orchestrator | Friday 19 September 2025 11:21:12 +0000 (0:00:00.127) 0:00:24.552 ****** 2025-09-19 11:21:18.003538 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.003549 | orchestrator | 2025-09-19 11:21:18.003560 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-19 11:21:18.003571 | orchestrator | Friday 19 September 2025 11:21:13 +0000 (0:00:00.124) 0:00:24.677 ****** 2025-09-19 11:21:18.003581 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.003592 | orchestrator | 2025-09-19 11:21:18.003603 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-19 11:21:18.003613 | orchestrator | Friday 19 September 2025 11:21:13 +0000 (0:00:00.136) 0:00:24.813 ****** 2025-09-19 11:21:18.003624 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:21:18.003635 | orchestrator | 2025-09-19 11:21:18.003646 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-19 
11:21:18.003656 | orchestrator | Friday 19 September 2025 11:21:13 +0000 (0:00:00.135) 0:00:24.949 ****** 2025-09-19 11:21:18.003667 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '499bb3ba-5d36-55d4-9ab4-77fea8769c5a'}}) 2025-09-19 11:21:18.003678 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '482defc3-95b3-50a2-a4e9-5dea1f7a25a6'}}) 2025-09-19 11:21:18.003689 | orchestrator | 2025-09-19 11:21:18.003700 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-19 11:21:18.003731 | orchestrator | Friday 19 September 2025 11:21:13 +0000 (0:00:00.182) 0:00:25.131 ****** 2025-09-19 11:21:18.003743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '499bb3ba-5d36-55d4-9ab4-77fea8769c5a'}})  2025-09-19 11:21:18.003755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '482defc3-95b3-50a2-a4e9-5dea1f7a25a6'}})  2025-09-19 11:21:18.003768 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.003780 | orchestrator | 2025-09-19 11:21:18.003793 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-19 11:21:18.003805 | orchestrator | Friday 19 September 2025 11:21:13 +0000 (0:00:00.161) 0:00:25.293 ****** 2025-09-19 11:21:18.003817 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '499bb3ba-5d36-55d4-9ab4-77fea8769c5a'}})  2025-09-19 11:21:18.003829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '482defc3-95b3-50a2-a4e9-5dea1f7a25a6'}})  2025-09-19 11:21:18.003841 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.003854 | orchestrator | 2025-09-19 11:21:18.003866 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-19 11:21:18.003878 | 
orchestrator | Friday 19 September 2025 11:21:13 +0000 (0:00:00.169) 0:00:25.462 ****** 2025-09-19 11:21:18.003891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '499bb3ba-5d36-55d4-9ab4-77fea8769c5a'}})  2025-09-19 11:21:18.003904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '482defc3-95b3-50a2-a4e9-5dea1f7a25a6'}})  2025-09-19 11:21:18.003916 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.003961 | orchestrator | 2025-09-19 11:21:18.003974 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-19 11:21:18.003987 | orchestrator | Friday 19 September 2025 11:21:14 +0000 (0:00:00.176) 0:00:25.639 ****** 2025-09-19 11:21:18.003999 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:21:18.004012 | orchestrator | 2025-09-19 11:21:18.004024 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-19 11:21:18.004037 | orchestrator | Friday 19 September 2025 11:21:14 +0000 (0:00:00.148) 0:00:25.788 ****** 2025-09-19 11:21:18.004048 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:21:18.004061 | orchestrator | 2025-09-19 11:21:18.004073 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-19 11:21:18.004085 | orchestrator | Friday 19 September 2025 11:21:14 +0000 (0:00:00.139) 0:00:25.927 ****** 2025-09-19 11:21:18.004097 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.004109 | orchestrator | 2025-09-19 11:21:18.004137 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-19 11:21:18.004149 | orchestrator | Friday 19 September 2025 11:21:14 +0000 (0:00:00.126) 0:00:26.054 ****** 2025-09-19 11:21:18.004160 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.004171 | orchestrator | 2025-09-19 11:21:18.004182 | orchestrator | TASK 
[Set DB+WAL devices config data] ****************************************** 2025-09-19 11:21:18.004192 | orchestrator | Friday 19 September 2025 11:21:14 +0000 (0:00:00.363) 0:00:26.417 ****** 2025-09-19 11:21:18.004203 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.004213 | orchestrator | 2025-09-19 11:21:18.004224 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-19 11:21:18.004235 | orchestrator | Friday 19 September 2025 11:21:14 +0000 (0:00:00.136) 0:00:26.554 ****** 2025-09-19 11:21:18.004245 | orchestrator | ok: [testbed-node-4] => { 2025-09-19 11:21:18.004256 | orchestrator |  "ceph_osd_devices": { 2025-09-19 11:21:18.004267 | orchestrator |  "sdb": { 2025-09-19 11:21:18.004278 | orchestrator |  "osd_lvm_uuid": "499bb3ba-5d36-55d4-9ab4-77fea8769c5a" 2025-09-19 11:21:18.004289 | orchestrator |  }, 2025-09-19 11:21:18.004300 | orchestrator |  "sdc": { 2025-09-19 11:21:18.004311 | orchestrator |  "osd_lvm_uuid": "482defc3-95b3-50a2-a4e9-5dea1f7a25a6" 2025-09-19 11:21:18.004329 | orchestrator |  } 2025-09-19 11:21:18.004340 | orchestrator |  } 2025-09-19 11:21:18.004351 | orchestrator | } 2025-09-19 11:21:18.004362 | orchestrator | 2025-09-19 11:21:18.004373 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-19 11:21:18.004384 | orchestrator | Friday 19 September 2025 11:21:15 +0000 (0:00:00.139) 0:00:26.694 ****** 2025-09-19 11:21:18.004394 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.004405 | orchestrator | 2025-09-19 11:21:18.004422 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-19 11:21:18.004433 | orchestrator | Friday 19 September 2025 11:21:15 +0000 (0:00:00.128) 0:00:26.823 ****** 2025-09-19 11:21:18.004444 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.004454 | orchestrator | 2025-09-19 11:21:18.004465 | orchestrator | TASK [Print 
shared DB/WAL devices] ********************************************* 2025-09-19 11:21:18.004476 | orchestrator | Friday 19 September 2025 11:21:15 +0000 (0:00:00.129) 0:00:26.952 ****** 2025-09-19 11:21:18.004486 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:21:18.004497 | orchestrator | 2025-09-19 11:21:18.004508 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-19 11:21:18.004518 | orchestrator | Friday 19 September 2025 11:21:15 +0000 (0:00:00.133) 0:00:27.086 ****** 2025-09-19 11:21:18.004529 | orchestrator | changed: [testbed-node-4] => { 2025-09-19 11:21:18.004540 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-19 11:21:18.004550 | orchestrator |  "ceph_osd_devices": { 2025-09-19 11:21:18.004561 | orchestrator |  "sdb": { 2025-09-19 11:21:18.004572 | orchestrator |  "osd_lvm_uuid": "499bb3ba-5d36-55d4-9ab4-77fea8769c5a" 2025-09-19 11:21:18.004583 | orchestrator |  }, 2025-09-19 11:21:18.004598 | orchestrator |  "sdc": { 2025-09-19 11:21:18.004609 | orchestrator |  "osd_lvm_uuid": "482defc3-95b3-50a2-a4e9-5dea1f7a25a6" 2025-09-19 11:21:18.004620 | orchestrator |  } 2025-09-19 11:21:18.004631 | orchestrator |  }, 2025-09-19 11:21:18.004642 | orchestrator |  "lvm_volumes": [ 2025-09-19 11:21:18.004652 | orchestrator |  { 2025-09-19 11:21:18.004663 | orchestrator |  "data": "osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a", 2025-09-19 11:21:18.004674 | orchestrator |  "data_vg": "ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a" 2025-09-19 11:21:18.004685 | orchestrator |  }, 2025-09-19 11:21:18.004695 | orchestrator |  { 2025-09-19 11:21:18.004706 | orchestrator |  "data": "osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6", 2025-09-19 11:21:18.004717 | orchestrator |  "data_vg": "ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6" 2025-09-19 11:21:18.004728 | orchestrator |  } 2025-09-19 11:21:18.004739 | orchestrator |  ] 2025-09-19 11:21:18.004749 | orchestrator |  } 2025-09-19 11:21:18.004760 | 
orchestrator | } 2025-09-19 11:21:18.004771 | orchestrator | 2025-09-19 11:21:18.004782 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-19 11:21:18.004793 | orchestrator | Friday 19 September 2025 11:21:15 +0000 (0:00:00.204) 0:00:27.290 ****** 2025-09-19 11:21:18.004803 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-19 11:21:18.004814 | orchestrator | 2025-09-19 11:21:18.004825 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-19 11:21:18.004836 | orchestrator | 2025-09-19 11:21:18.004846 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 11:21:18.004857 | orchestrator | Friday 19 September 2025 11:21:16 +0000 (0:00:01.077) 0:00:28.367 ****** 2025-09-19 11:21:18.004868 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-19 11:21:18.004878 | orchestrator | 2025-09-19 11:21:18.004889 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 11:21:18.004899 | orchestrator | Friday 19 September 2025 11:21:17 +0000 (0:00:00.418) 0:00:28.786 ****** 2025-09-19 11:21:18.004910 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:21:18.004945 | orchestrator | 2025-09-19 11:21:18.004957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:18.004968 | orchestrator | Friday 19 September 2025 11:21:17 +0000 (0:00:00.476) 0:00:29.263 ****** 2025-09-19 11:21:18.004978 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-19 11:21:18.004989 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-19 11:21:18.005000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-19 
11:21:18.005011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-19 11:21:18.005022 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-19 11:21:18.005032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-19 11:21:18.005049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-19 11:21:25.156181 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-19 11:21:25.156274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-19 11:21:25.156289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-19 11:21:25.156300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-19 11:21:25.156311 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-19 11:21:25.156321 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-19 11:21:25.156333 | orchestrator | 2025-09-19 11:21:25.156345 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:25.156357 | orchestrator | Friday 19 September 2025 11:21:17 +0000 (0:00:00.327) 0:00:29.590 ****** 2025-09-19 11:21:25.156368 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:21:25.156379 | orchestrator | 2025-09-19 11:21:25.156390 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:25.156401 | orchestrator | Friday 19 September 2025 11:21:18 +0000 (0:00:00.177) 0:00:29.768 ****** 2025-09-19 11:21:25.156412 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:21:25.156423 | orchestrator | 
2025-09-19 11:21:25.156433 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:25.156444 | orchestrator | Friday 19 September 2025 11:21:18 +0000 (0:00:00.179) 0:00:29.947 ****** 2025-09-19 11:21:25.156455 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:21:25.156466 | orchestrator | 2025-09-19 11:21:25.156477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:25.156487 | orchestrator | Friday 19 September 2025 11:21:18 +0000 (0:00:00.144) 0:00:30.092 ****** 2025-09-19 11:21:25.156498 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:21:25.156509 | orchestrator | 2025-09-19 11:21:25.156520 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:25.156531 | orchestrator | Friday 19 September 2025 11:21:18 +0000 (0:00:00.186) 0:00:30.279 ****** 2025-09-19 11:21:25.156541 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:21:25.156552 | orchestrator | 2025-09-19 11:21:25.156563 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:25.156573 | orchestrator | Friday 19 September 2025 11:21:18 +0000 (0:00:00.207) 0:00:30.487 ****** 2025-09-19 11:21:25.156584 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:21:25.156595 | orchestrator | 2025-09-19 11:21:25.156606 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:25.156617 | orchestrator | Friday 19 September 2025 11:21:19 +0000 (0:00:00.158) 0:00:30.645 ****** 2025-09-19 11:21:25.156628 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:21:25.156639 | orchestrator | 2025-09-19 11:21:25.156671 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:25.156683 | orchestrator | Friday 19 September 2025 11:21:19 +0000 
(0:00:00.181) 0:00:30.826 ****** 2025-09-19 11:21:25.156694 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:21:25.156705 | orchestrator | 2025-09-19 11:21:25.156730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:25.156743 | orchestrator | Friday 19 September 2025 11:21:19 +0000 (0:00:00.206) 0:00:31.033 ****** 2025-09-19 11:21:25.156756 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d) 2025-09-19 11:21:25.156770 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d) 2025-09-19 11:21:25.156782 | orchestrator | 2025-09-19 11:21:25.156795 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:25.156807 | orchestrator | Friday 19 September 2025 11:21:19 +0000 (0:00:00.520) 0:00:31.554 ****** 2025-09-19 11:21:25.156819 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4ab3eba9-7f04-4545-b862-1d19a7d78b14) 2025-09-19 11:21:25.156831 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4ab3eba9-7f04-4545-b862-1d19a7d78b14) 2025-09-19 11:21:25.156844 | orchestrator | 2025-09-19 11:21:25.156856 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:25.156869 | orchestrator | Friday 19 September 2025 11:21:20 +0000 (0:00:00.610) 0:00:32.165 ****** 2025-09-19 11:21:25.156880 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_82c12b62-ffbd-484b-a107-b043e35ec15c) 2025-09-19 11:21:25.156892 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_82c12b62-ffbd-484b-a107-b043e35ec15c) 2025-09-19 11:21:25.156904 | orchestrator | 2025-09-19 11:21:25.156916 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:25.156949 | orchestrator | 
Friday 19 September 2025 11:21:21 +0000 (0:00:00.441) 0:00:32.606 ****** 2025-09-19 11:21:25.156961 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_23c8bdec-2f7a-480a-98d1-592cee3b582b) 2025-09-19 11:21:25.156973 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_23c8bdec-2f7a-480a-98d1-592cee3b582b) 2025-09-19 11:21:25.156985 | orchestrator | 2025-09-19 11:21:25.156998 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:21:25.157010 | orchestrator | Friday 19 September 2025 11:21:21 +0000 (0:00:00.395) 0:00:33.002 ****** 2025-09-19 11:21:25.157022 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 11:21:25.157034 | orchestrator | 2025-09-19 11:21:25.157046 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:25.157057 | orchestrator | Friday 19 September 2025 11:21:21 +0000 (0:00:00.327) 0:00:33.329 ****** 2025-09-19 11:21:25.157082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-19 11:21:25.157094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-19 11:21:25.157105 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-19 11:21:25.157116 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-19 11:21:25.157126 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-19 11:21:25.157137 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-19 11:21:25.157148 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-19 11:21:25.157158 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-19 11:21:25.157169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-19 11:21:25.157190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-19 11:21:25.157201 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-19 11:21:25.157211 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-19 11:21:25.157222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-19 11:21:25.157233 | orchestrator | 2025-09-19 11:21:25.157244 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:25.157255 | orchestrator | Friday 19 September 2025 11:21:22 +0000 (0:00:00.358) 0:00:33.687 ****** 2025-09-19 11:21:25.157265 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:21:25.157276 | orchestrator | 2025-09-19 11:21:25.157287 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:25.157298 | orchestrator | Friday 19 September 2025 11:21:22 +0000 (0:00:00.212) 0:00:33.900 ****** 2025-09-19 11:21:25.157309 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:21:25.157319 | orchestrator | 2025-09-19 11:21:25.157331 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:25.157341 | orchestrator | Friday 19 September 2025 11:21:22 +0000 (0:00:00.174) 0:00:34.074 ****** 2025-09-19 11:21:25.157352 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:21:25.157363 | orchestrator | 2025-09-19 11:21:25.157374 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:21:25.157385 | 
orchestrator | Friday 19 September 2025 11:21:22 +0000 (0:00:00.183) 0:00:34.257 ******
2025-09-19 11:21:25.157395 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:25.157406 | orchestrator |
2025-09-19 11:21:25.157417 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:25.157428 | orchestrator | Friday 19 September 2025 11:21:22 +0000 (0:00:00.194) 0:00:34.452 ******
2025-09-19 11:21:25.157439 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:25.157449 | orchestrator |
2025-09-19 11:21:25.157460 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:25.157471 | orchestrator | Friday 19 September 2025 11:21:23 +0000 (0:00:00.180) 0:00:34.632 ******
2025-09-19 11:21:25.157481 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:25.157492 | orchestrator |
2025-09-19 11:21:25.157503 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:25.157514 | orchestrator | Friday 19 September 2025 11:21:23 +0000 (0:00:00.458) 0:00:35.091 ******
2025-09-19 11:21:25.157524 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:25.157535 | orchestrator |
2025-09-19 11:21:25.157546 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:25.157557 | orchestrator | Friday 19 September 2025 11:21:23 +0000 (0:00:00.178) 0:00:35.269 ******
2025-09-19 11:21:25.157567 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:25.157578 | orchestrator |
2025-09-19 11:21:25.157589 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:25.157600 | orchestrator | Friday 19 September 2025 11:21:23 +0000 (0:00:00.191) 0:00:35.461 ******
2025-09-19 11:21:25.157611 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-09-19 11:21:25.157621 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-09-19 11:21:25.157632 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-09-19 11:21:25.157643 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-09-19 11:21:25.157654 | orchestrator |
2025-09-19 11:21:25.157665 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:25.157676 | orchestrator | Friday 19 September 2025 11:21:24 +0000 (0:00:00.508) 0:00:35.970 ******
2025-09-19 11:21:25.157687 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:25.157698 | orchestrator |
2025-09-19 11:21:25.157708 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:25.157719 | orchestrator | Friday 19 September 2025 11:21:24 +0000 (0:00:00.198) 0:00:36.169 ******
2025-09-19 11:21:25.157736 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:25.157747 | orchestrator |
2025-09-19 11:21:25.157758 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:25.157769 | orchestrator | Friday 19 September 2025 11:21:24 +0000 (0:00:00.198) 0:00:36.367 ******
2025-09-19 11:21:25.157780 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:25.157791 | orchestrator |
2025-09-19 11:21:25.157801 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:21:25.157812 | orchestrator | Friday 19 September 2025 11:21:24 +0000 (0:00:00.198) 0:00:36.565 ******
2025-09-19 11:21:25.157828 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:25.157839 | orchestrator |
2025-09-19 11:21:25.157851 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-19 11:21:25.157866 | orchestrator | Friday 19 September 2025 11:21:25 +0000 (0:00:00.182) 0:00:36.748 ******
2025-09-19 11:21:29.345238 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-09-19 11:21:29.345325 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-09-19 11:21:29.345339 | orchestrator |
2025-09-19 11:21:29.345351 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-19 11:21:29.345362 | orchestrator | Friday 19 September 2025 11:21:25 +0000 (0:00:00.147) 0:00:36.895 ******
2025-09-19 11:21:29.345373 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:29.345384 | orchestrator |
2025-09-19 11:21:29.345395 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-19 11:21:29.345406 | orchestrator | Friday 19 September 2025 11:21:25 +0000 (0:00:00.114) 0:00:37.010 ******
2025-09-19 11:21:29.345416 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:29.345427 | orchestrator |
2025-09-19 11:21:29.345438 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-19 11:21:29.345448 | orchestrator | Friday 19 September 2025 11:21:25 +0000 (0:00:00.137) 0:00:37.147 ******
2025-09-19 11:21:29.345459 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:29.345469 | orchestrator |
2025-09-19 11:21:29.345480 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-19 11:21:29.345491 | orchestrator | Friday 19 September 2025 11:21:25 +0000 (0:00:00.108) 0:00:37.256 ******
2025-09-19 11:21:29.345501 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:21:29.345512 | orchestrator |
2025-09-19 11:21:29.345523 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-19 11:21:29.345534 | orchestrator | Friday 19 September 2025 11:21:25 +0000 (0:00:00.271) 0:00:37.528 ******
2025-09-19 11:21:29.345545 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'}})
2025-09-19 11:21:29.345556 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9f018b0b-9dc8-5104-9bc9-2c288294c8fd'}})
2025-09-19 11:21:29.345567 | orchestrator |
2025-09-19 11:21:29.345578 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-19 11:21:29.345589 | orchestrator | Friday 19 September 2025 11:21:26 +0000 (0:00:00.202) 0:00:37.730 ******
2025-09-19 11:21:29.345600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'}})
2025-09-19 11:21:29.345611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9f018b0b-9dc8-5104-9bc9-2c288294c8fd'}})
2025-09-19 11:21:29.345622 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:29.345633 | orchestrator |
2025-09-19 11:21:29.345658 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-19 11:21:29.345670 | orchestrator | Friday 19 September 2025 11:21:26 +0000 (0:00:00.169) 0:00:37.899 ******
2025-09-19 11:21:29.345680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'}})
2025-09-19 11:21:29.345691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9f018b0b-9dc8-5104-9bc9-2c288294c8fd'}})
2025-09-19 11:21:29.345723 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:29.345734 | orchestrator |
2025-09-19 11:21:29.345745 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-19 11:21:29.345756 | orchestrator | Friday 19 September 2025 11:21:26 +0000 (0:00:00.167) 0:00:38.067 ******
2025-09-19 11:21:29.345767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'}})
2025-09-19 11:21:29.345777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9f018b0b-9dc8-5104-9bc9-2c288294c8fd'}})
2025-09-19 11:21:29.345788 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:29.345800 | orchestrator |
2025-09-19 11:21:29.345812 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-19 11:21:29.345825 | orchestrator | Friday 19 September 2025 11:21:26 +0000 (0:00:00.142) 0:00:38.209 ******
2025-09-19 11:21:29.345838 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:21:29.345850 | orchestrator |
2025-09-19 11:21:29.345862 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-19 11:21:29.345873 | orchestrator | Friday 19 September 2025 11:21:26 +0000 (0:00:00.134) 0:00:38.344 ******
2025-09-19 11:21:29.345886 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:21:29.345898 | orchestrator |
2025-09-19 11:21:29.345911 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-19 11:21:29.345948 | orchestrator | Friday 19 September 2025 11:21:26 +0000 (0:00:00.168) 0:00:38.512 ******
2025-09-19 11:21:29.345961 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:29.345973 | orchestrator |
2025-09-19 11:21:29.345986 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-19 11:21:29.345998 | orchestrator | Friday 19 September 2025 11:21:27 +0000 (0:00:00.130) 0:00:38.643 ******
2025-09-19 11:21:29.346010 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:29.346080 | orchestrator |
2025-09-19 11:21:29.346094 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-19 11:21:29.346106 | orchestrator | Friday 19 September 2025 11:21:27 +0000 (0:00:00.232) 0:00:38.876 ******
2025-09-19 11:21:29.346118 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:29.346131 | orchestrator |
2025-09-19 11:21:29.346143 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-19 11:21:29.346154 | orchestrator | Friday 19 September 2025 11:21:27 +0000 (0:00:00.148) 0:00:39.023 ******
2025-09-19 11:21:29.346165 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 11:21:29.346175 | orchestrator |  "ceph_osd_devices": {
2025-09-19 11:21:29.346186 | orchestrator |  "sdb": {
2025-09-19 11:21:29.346197 | orchestrator |  "osd_lvm_uuid": "4ec87955-83d4-5f81-a4e3-fa3184f5f6e6"
2025-09-19 11:21:29.346223 | orchestrator |  },
2025-09-19 11:21:29.346236 | orchestrator |  "sdc": {
2025-09-19 11:21:29.346247 | orchestrator |  "osd_lvm_uuid": "9f018b0b-9dc8-5104-9bc9-2c288294c8fd"
2025-09-19 11:21:29.346257 | orchestrator |  }
2025-09-19 11:21:29.346268 | orchestrator |  }
2025-09-19 11:21:29.346279 | orchestrator | }
2025-09-19 11:21:29.346290 | orchestrator |
2025-09-19 11:21:29.346301 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-19 11:21:29.346312 | orchestrator | Friday 19 September 2025 11:21:27 +0000 (0:00:00.209) 0:00:39.171 ******
2025-09-19 11:21:29.346323 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:29.346334 | orchestrator |
2025-09-19 11:21:29.346344 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-19 11:21:29.346355 | orchestrator | Friday 19 September 2025 11:21:27 +0000 (0:00:00.304) 0:00:39.381 ******
2025-09-19 11:21:29.346366 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:29.346376 | orchestrator |
2025-09-19 11:21:29.346387 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-19 11:21:29.346435 | orchestrator | Friday 19 September 2025 11:21:28 +0000 (0:00:00.112) 0:00:39.685 ******
2025-09-19 11:21:29.346446 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:21:29.346457 | orchestrator |
2025-09-19 11:21:29.346468 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-19 11:21:29.346478 | orchestrator | Friday 19 September 2025 11:21:28 +0000 (0:00:00.112) 0:00:39.798 ******
2025-09-19 11:21:29.346489 | orchestrator | changed: [testbed-node-5] => {
2025-09-19 11:21:29.346499 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-09-19 11:21:29.346510 | orchestrator |  "ceph_osd_devices": {
2025-09-19 11:21:29.346521 | orchestrator |  "sdb": {
2025-09-19 11:21:29.346531 | orchestrator |  "osd_lvm_uuid": "4ec87955-83d4-5f81-a4e3-fa3184f5f6e6"
2025-09-19 11:21:29.346542 | orchestrator |  },
2025-09-19 11:21:29.346553 | orchestrator |  "sdc": {
2025-09-19 11:21:29.346564 | orchestrator |  "osd_lvm_uuid": "9f018b0b-9dc8-5104-9bc9-2c288294c8fd"
2025-09-19 11:21:29.346574 | orchestrator |  }
2025-09-19 11:21:29.346585 | orchestrator |  },
2025-09-19 11:21:29.346596 | orchestrator |  "lvm_volumes": [
2025-09-19 11:21:29.346606 | orchestrator |  {
2025-09-19 11:21:29.346617 | orchestrator |  "data": "osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6",
2025-09-19 11:21:29.346628 | orchestrator |  "data_vg": "ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6"
2025-09-19 11:21:29.346638 | orchestrator |  },
2025-09-19 11:21:29.346649 | orchestrator |  {
2025-09-19 11:21:29.346659 | orchestrator |  "data": "osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd",
2025-09-19 11:21:29.346670 | orchestrator |  "data_vg": "ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd"
2025-09-19 11:21:29.346681 | orchestrator |  }
2025-09-19 11:21:29.346692 | orchestrator |  ]
2025-09-19 11:21:29.346703 | orchestrator |  }
2025-09-19 11:21:29.346713 | orchestrator | }
2025-09-19 11:21:29.346728 | orchestrator |
2025-09-19 11:21:29.346739 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-19 11:21:29.346750 | orchestrator | Friday 19 September 2025 11:21:28 +0000 (0:00:00.199) 0:00:39.997 ******
2025-09-19 11:21:29.346760 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-19 11:21:29.346771 | orchestrator |
2025-09-19 11:21:29.346781 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:21:29.346800 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-19 11:21:29.346811 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-19 11:21:29.346822 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-19 11:21:29.346833 | orchestrator |
2025-09-19 11:21:29.346844 | orchestrator |
2025-09-19 11:21:29.346854 | orchestrator |
2025-09-19 11:21:29.346865 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:21:29.346875 | orchestrator | Friday 19 September 2025 11:21:29 +0000 (0:00:00.932) 0:00:40.930 ******
2025-09-19 11:21:29.346886 | orchestrator | ===============================================================================
2025-09-19 11:21:29.346897 | orchestrator | Write configuration file ------------------------------------------------ 4.18s
2025-09-19 11:21:29.346907 | orchestrator | Add known partitions to the list of available block devices ------------- 1.13s
2025-09-19 11:21:29.346952 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s
2025-09-19 11:21:29.346963 | orchestrator | Add known partitions to the list of available block devices ------------- 1.01s
2025-09-19 11:21:29.346974 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.92s
2025-09-19 11:21:29.346985 | orchestrator | Get initial list of available block devices ----------------------------- 0.92s
2025-09-19 11:21:29.347003 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s
2025-09-19 11:21:29.347014 | orchestrator | Set WAL devices config data --------------------------------------------- 0.73s
2025-09-19 11:21:29.347024 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.71s
2025-09-19 11:21:29.347035 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s
2025-09-19 11:21:29.347045 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.68s
2025-09-19 11:21:29.347056 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2025-09-19 11:21:29.347066 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-09-19 11:21:29.347077 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-09-19 11:21:29.347094 | orchestrator | Print configuration data ------------------------------------------------ 0.61s
2025-09-19 11:21:29.589908 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-09-19 11:21:29.590082 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s
2025-09-19 11:21:29.590098 | orchestrator | Print DB devices -------------------------------------------------------- 0.57s
2025-09-19 11:21:29.590109 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.56s
2025-09-19 11:21:29.590120 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.55s
2025-09-19 11:21:52.198119 | orchestrator | 2025-09-19 11:21:52 | INFO  | Task 55d622c8-3c9d-41de-b03c-79803f4fe6c6 (sync inventory) is running in background. Output coming soon.
2025-09-19 11:22:10.857328 | orchestrator | 2025-09-19 11:21:53 | INFO  | Starting group_vars file reorganization
2025-09-19 11:22:10.857434 | orchestrator | 2025-09-19 11:21:53 | INFO  | Moved 0 file(s) to their respective directories
2025-09-19 11:22:10.857448 | orchestrator | 2025-09-19 11:21:53 | INFO  | Group_vars file reorganization completed
2025-09-19 11:22:10.857459 | orchestrator | 2025-09-19 11:21:55 | INFO  | Starting variable preparation from inventory
2025-09-19 11:22:10.857469 | orchestrator | 2025-09-19 11:21:56 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-19 11:22:10.857479 | orchestrator | 2025-09-19 11:21:56 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-19 11:22:10.857489 | orchestrator | 2025-09-19 11:21:56 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-19 11:22:10.857498 | orchestrator | 2025-09-19 11:21:56 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-19 11:22:10.857508 | orchestrator | 2025-09-19 11:21:56 | INFO  | Variable preparation completed
2025-09-19 11:22:10.857517 | orchestrator | 2025-09-19 11:21:58 | INFO  | Starting inventory overwrite handling
2025-09-19 11:22:10.857526 | orchestrator | 2025-09-19 11:21:58 | INFO  | Handling group overwrites in 99-overwrite
2025-09-19 11:22:10.857536 | orchestrator | 2025-09-19 11:21:58 | INFO  | Removing group frr:children from 60-generic
2025-09-19 11:22:10.857546 | orchestrator | 2025-09-19 11:21:58 | INFO  | Removing group storage:children from 50-kolla
2025-09-19 11:22:10.857556 | orchestrator | 2025-09-19 11:21:58 | INFO  | Removing group netbird:children from 50-infrastruture
2025-09-19 11:22:10.857565 | orchestrator | 2025-09-19 11:21:58 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-19 11:22:10.857576 | orchestrator | 2025-09-19 11:21:58 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-19 11:22:10.857585 | orchestrator | 2025-09-19 11:21:58 | INFO  | Handling group overwrites in 20-roles
2025-09-19 11:22:10.857595 | orchestrator | 2025-09-19 11:21:58 | INFO  | Removing group k3s_node from 50-infrastruture
2025-09-19 11:22:10.857630 | orchestrator | 2025-09-19 11:21:58 | INFO  | Removed 6 group(s) in total
2025-09-19 11:22:10.857640 | orchestrator | 2025-09-19 11:21:58 | INFO  | Inventory overwrite handling completed
2025-09-19 11:22:10.857649 | orchestrator | 2025-09-19 11:21:58 | INFO  | Starting merge of inventory files
2025-09-19 11:22:10.857658 | orchestrator | 2025-09-19 11:21:58 | INFO  | Inventory files merged successfully
2025-09-19 11:22:10.857668 | orchestrator | 2025-09-19 11:22:02 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-19 11:22:10.857677 | orchestrator | 2025-09-19 11:22:09 | INFO  | Successfully wrote ClusterShell configuration
2025-09-19 11:22:10.857687 | orchestrator | [master 5c120a9] 2025-09-19-11-22
2025-09-19 11:22:10.857698 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-19 11:22:12.788135 | orchestrator | 2025-09-19 11:22:12 | INFO  | Task 65f5114d-f82c-4d09-b88e-a5ac977eae83 (ceph-create-lvm-devices) was prepared for execution.
2025-09-19 11:22:12.788214 | orchestrator | 2025-09-19 11:22:12 | INFO  | It takes a moment until task 65f5114d-f82c-4d09-b88e-a5ac977eae83 (ceph-create-lvm-devices) has been started and output is visible here.
2025-09-19 11:22:23.008667 | orchestrator |
2025-09-19 11:22:23.008757 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-19 11:22:23.008773 | orchestrator |
2025-09-19 11:22:23.008785 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 11:22:23.008797 | orchestrator | Friday 19 September 2025 11:22:15 +0000 (0:00:00.262) 0:00:00.262 ******
2025-09-19 11:22:23.008808 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 11:22:23.008819 | orchestrator |
2025-09-19 11:22:23.008830 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 11:22:23.008840 | orchestrator | Friday 19 September 2025 11:22:16 +0000 (0:00:00.205) 0:00:00.467 ******
2025-09-19 11:22:23.008851 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:22:23.008863 | orchestrator |
2025-09-19 11:22:23.008920 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.008932 | orchestrator | Friday 19 September 2025 11:22:16 +0000 (0:00:00.197) 0:00:00.665 ******
2025-09-19 11:22:23.008942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-19 11:22:23.008954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-19 11:22:23.008965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-19 11:22:23.008977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-19 11:22:23.008987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-19 11:22:23.008998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-19 11:22:23.009009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-19 11:22:23.009020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-19 11:22:23.009030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-19 11:22:23.009041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-19 11:22:23.009052 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-19 11:22:23.009062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-19 11:22:23.009073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-19 11:22:23.009084 | orchestrator |
2025-09-19 11:22:23.009094 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.009145 | orchestrator | Friday 19 September 2025 11:22:16 +0000 (0:00:00.395) 0:00:01.060 ******
2025-09-19 11:22:23.009158 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.009169 | orchestrator |
2025-09-19 11:22:23.009180 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.009207 | orchestrator | Friday 19 September 2025 11:22:17 +0000 (0:00:00.352) 0:00:01.412 ******
2025-09-19 11:22:23.009219 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.009232 | orchestrator |
2025-09-19 11:22:23.009244 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.009256 | orchestrator | Friday 19 September 2025 11:22:17 +0000 (0:00:00.181) 0:00:01.594 ******
2025-09-19 11:22:23.009269 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.009281 | orchestrator |
2025-09-19 11:22:23.009300 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.009312 | orchestrator | Friday 19 September 2025 11:22:17 +0000 (0:00:00.165) 0:00:01.759 ******
2025-09-19 11:22:23.009325 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.009337 | orchestrator |
2025-09-19 11:22:23.009350 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.009363 | orchestrator | Friday 19 September 2025 11:22:17 +0000 (0:00:00.176) 0:00:01.936 ******
2025-09-19 11:22:23.009376 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.009388 | orchestrator |
2025-09-19 11:22:23.009401 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.009413 | orchestrator | Friday 19 September 2025 11:22:17 +0000 (0:00:00.178) 0:00:02.115 ******
2025-09-19 11:22:23.009426 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.009438 | orchestrator |
2025-09-19 11:22:23.009451 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.009463 | orchestrator | Friday 19 September 2025 11:22:17 +0000 (0:00:00.202) 0:00:02.317 ******
2025-09-19 11:22:23.009475 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.009488 | orchestrator |
2025-09-19 11:22:23.009501 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.009513 | orchestrator | Friday 19 September 2025 11:22:18 +0000 (0:00:00.192) 0:00:02.510 ******
2025-09-19 11:22:23.009525 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.009537 | orchestrator |
2025-09-19 11:22:23.009550 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.009563 | orchestrator | Friday 19 September 2025 11:22:18 +0000 (0:00:00.176) 0:00:02.687 ******
2025-09-19 11:22:23.009575 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10)
2025-09-19 11:22:23.009586 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10)
2025-09-19 11:22:23.009597 | orchestrator |
2025-09-19 11:22:23.009608 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.009619 | orchestrator | Friday 19 September 2025 11:22:18 +0000 (0:00:00.395) 0:00:03.082 ******
2025-09-19 11:22:23.009645 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_729b54dd-f4c1-4a98-9e39-7aa2dbdf058c)
2025-09-19 11:22:23.009657 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_729b54dd-f4c1-4a98-9e39-7aa2dbdf058c)
2025-09-19 11:22:23.009668 | orchestrator |
2025-09-19 11:22:23.009679 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.009690 | orchestrator | Friday 19 September 2025 11:22:19 +0000 (0:00:00.482) 0:00:03.565 ******
2025-09-19 11:22:23.009701 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ff354216-c1d2-4110-b9e3-f4cf06b21a62)
2025-09-19 11:22:23.009712 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ff354216-c1d2-4110-b9e3-f4cf06b21a62)
2025-09-19 11:22:23.009723 | orchestrator |
2025-09-19 11:22:23.009733 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.009752 | orchestrator | Friday 19 September 2025 11:22:19 +0000 (0:00:00.570) 0:00:04.136 ******
2025-09-19 11:22:23.009763 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2859ea6e-5cf3-4595-8353-f67711d21d4e)
2025-09-19 11:22:23.009773 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2859ea6e-5cf3-4595-8353-f67711d21d4e)
2025-09-19 11:22:23.009784 | orchestrator |
2025-09-19 11:22:23.009795 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:23.009806 | orchestrator | Friday 19 September 2025 11:22:20 +0000 (0:00:00.564) 0:00:04.700 ******
2025-09-19 11:22:23.009817 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-19 11:22:23.009827 | orchestrator |
2025-09-19 11:22:23.009838 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:23.009849 | orchestrator | Friday 19 September 2025 11:22:20 +0000 (0:00:00.631) 0:00:05.332 ******
2025-09-19 11:22:23.009859 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-19 11:22:23.009890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-19 11:22:23.009901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-19 11:22:23.009912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-19 11:22:23.009922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-19 11:22:23.009933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-19 11:22:23.009944 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-19 11:22:23.009954 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-19 11:22:23.009965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-19 11:22:23.009976 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-19 11:22:23.009987 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-19 11:22:23.009998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-19 11:22:23.010008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-19 11:22:23.010073 | orchestrator |
2025-09-19 11:22:23.010086 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:23.010097 | orchestrator | Friday 19 September 2025 11:22:21 +0000 (0:00:00.390) 0:00:05.723 ******
2025-09-19 11:22:23.010108 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.010119 | orchestrator |
2025-09-19 11:22:23.010130 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:23.010141 | orchestrator | Friday 19 September 2025 11:22:21 +0000 (0:00:00.202) 0:00:05.925 ******
2025-09-19 11:22:23.010151 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.010162 | orchestrator |
2025-09-19 11:22:23.010173 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:23.010183 | orchestrator | Friday 19 September 2025 11:22:21 +0000 (0:00:00.190) 0:00:06.116 ******
2025-09-19 11:22:23.010194 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.010205 | orchestrator |
2025-09-19 11:22:23.010215 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:23.010226 | orchestrator | Friday 19 September 2025 11:22:22 +0000 (0:00:00.249) 0:00:06.365 ******
2025-09-19 11:22:23.010240 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.010258 | orchestrator |
2025-09-19 11:22:23.010277 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:23.010295 | orchestrator | Friday 19 September 2025 11:22:22 +0000 (0:00:00.192) 0:00:06.557 ******
2025-09-19 11:22:23.010323 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.010340 | orchestrator |
2025-09-19 11:22:23.010357 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:23.010375 | orchestrator | Friday 19 September 2025 11:22:22 +0000 (0:00:00.238) 0:00:06.796 ******
2025-09-19 11:22:23.010391 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.010408 | orchestrator |
2025-09-19 11:22:23.010426 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:23.010444 | orchestrator | Friday 19 September 2025 11:22:22 +0000 (0:00:00.189) 0:00:06.986 ******
2025-09-19 11:22:23.010462 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:23.010480 | orchestrator |
2025-09-19 11:22:23.010498 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:23.010517 | orchestrator | Friday 19 September 2025 11:22:22 +0000 (0:00:00.172) 0:00:07.158 ******
2025-09-19 11:22:23.010548 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:30.993032 | orchestrator |
2025-09-19 11:22:30.993162 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:30.993182 | orchestrator | Friday 19 September 2025 11:22:23 +0000 (0:00:00.200) 0:00:07.359 ******
2025-09-19 11:22:30.993194 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-19 11:22:30.993207 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-19 11:22:30.993218 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-19 11:22:30.993229 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-19 11:22:30.993240 | orchestrator |
2025-09-19 11:22:30.993252 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:30.993263 | orchestrator | Friday 19 September 2025 11:22:23 +0000 (0:00:00.850) 0:00:08.209 ******
2025-09-19 11:22:30.993274 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:30.993285 | orchestrator |
2025-09-19 11:22:30.993295 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:30.993306 | orchestrator | Friday 19 September 2025 11:22:24 +0000 (0:00:00.183) 0:00:08.393 ******
2025-09-19 11:22:30.993317 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:30.993328 | orchestrator |
2025-09-19 11:22:30.993339 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:30.993350 | orchestrator | Friday 19 September 2025 11:22:24 +0000 (0:00:00.201) 0:00:08.595 ******
2025-09-19 11:22:30.993360 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:30.993371 | orchestrator |
2025-09-19 11:22:30.993382 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:30.993393 | orchestrator | Friday 19 September 2025 11:22:24 +0000 (0:00:00.208) 0:00:08.803 ******
2025-09-19 11:22:30.993404 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:30.993415 | orchestrator |
2025-09-19 11:22:30.993426 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-19 11:22:30.993436 | orchestrator | Friday 19 September 2025 11:22:24 +0000 (0:00:00.196) 0:00:09.000 ******
2025-09-19 11:22:30.993447 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:30.993458 | orchestrator |
2025-09-19 11:22:30.993469 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-19 11:22:30.993480 | orchestrator | Friday 19 September 2025 11:22:24 +0000 (0:00:00.126) 0:00:09.126 ******
2025-09-19 11:22:30.993492 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f2e5a9ae-16db-5885-a5f1-5293896cd0a9'}})
2025-09-19 11:22:30.993505 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'd15bf0b7-095a-52ef-97a5-c7d3cf055ef5'}})
2025-09-19 11:22:30.993517 | orchestrator |
2025-09-19 11:22:30.993530 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-19 11:22:30.993543 | orchestrator | Friday 19 September 2025 11:22:24 +0000 (0:00:00.170) 0:00:09.296 ******
2025-09-19 11:22:30.993557 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})
2025-09-19 11:22:30.993594 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})
2025-09-19 11:22:30.993607 | orchestrator |
2025-09-19 11:22:30.993637 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-19 11:22:30.993650 | orchestrator | Friday 19 September 2025 11:22:27 +0000 (0:00:02.065) 0:00:11.362 ******
2025-09-19 11:22:30.993669 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})
2025-09-19 11:22:30.993683 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})
2025-09-19 11:22:30.993695 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:22:30.993707 | orchestrator |
2025-09-19 11:22:30.993719 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-19 11:22:30.993731 | orchestrator | Friday 19 September 2025 11:22:27 +0000 (0:00:00.158) 0:00:11.520 ******
2025-09-19 11:22:30.993743 | orchestrator | changed: [testbed-node-3] => (item={'data':
'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'}) 2025-09-19 11:22:30.993755 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'}) 2025-09-19 11:22:30.993767 | orchestrator | 2025-09-19 11:22:30.993779 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-19 11:22:30.993791 | orchestrator | Friday 19 September 2025 11:22:28 +0000 (0:00:01.546) 0:00:13.066 ****** 2025-09-19 11:22:30.993803 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:30.993815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:30.993828 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:30.993840 | orchestrator | 2025-09-19 11:22:30.993852 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-19 11:22:30.993891 | orchestrator | Friday 19 September 2025 11:22:28 +0000 (0:00:00.140) 0:00:13.207 ****** 2025-09-19 11:22:30.993904 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:30.993915 | orchestrator | 2025-09-19 11:22:30.993926 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-19 11:22:30.993953 | orchestrator | Friday 19 September 2025 11:22:29 +0000 (0:00:00.151) 0:00:13.359 ****** 2025-09-19 11:22:30.993965 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:30.993976 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:30.993987 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:30.993997 | orchestrator | 2025-09-19 11:22:30.994008 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-19 11:22:30.994109 | orchestrator | Friday 19 September 2025 11:22:29 +0000 (0:00:00.382) 0:00:13.741 ****** 2025-09-19 11:22:30.994132 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:30.994151 | orchestrator | 2025-09-19 11:22:30.994171 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-19 11:22:30.994190 | orchestrator | Friday 19 September 2025 11:22:29 +0000 (0:00:00.162) 0:00:13.904 ****** 2025-09-19 11:22:30.994206 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:30.994237 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:30.994249 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:30.994260 | orchestrator | 2025-09-19 11:22:30.994271 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-19 11:22:30.994282 | orchestrator | Friday 19 September 2025 11:22:29 +0000 (0:00:00.166) 0:00:14.070 ****** 2025-09-19 11:22:30.994292 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:30.994303 | orchestrator | 2025-09-19 11:22:30.994314 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-19 11:22:30.994324 | orchestrator | Friday 19 September 2025 11:22:29 +0000 (0:00:00.148) 0:00:14.219 ****** 2025-09-19 11:22:30.994335 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:30.994346 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:30.994357 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:30.994367 | orchestrator | 2025-09-19 11:22:30.994378 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-19 11:22:30.994389 | orchestrator | Friday 19 September 2025 11:22:30 +0000 (0:00:00.163) 0:00:14.383 ****** 2025-09-19 11:22:30.994400 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:22:30.994411 | orchestrator | 2025-09-19 11:22:30.994422 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-19 11:22:30.994432 | orchestrator | Friday 19 September 2025 11:22:30 +0000 (0:00:00.150) 0:00:14.533 ****** 2025-09-19 11:22:30.994443 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:30.994461 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:30.994472 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:30.994483 | orchestrator | 2025-09-19 11:22:30.994494 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-19 11:22:30.994504 | orchestrator | Friday 19 September 2025 11:22:30 +0000 (0:00:00.170) 0:00:14.704 ****** 2025-09-19 11:22:30.994515 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  
2025-09-19 11:22:30.994526 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:30.994537 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:30.994548 | orchestrator | 2025-09-19 11:22:30.994558 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-19 11:22:30.994569 | orchestrator | Friday 19 September 2025 11:22:30 +0000 (0:00:00.158) 0:00:14.863 ****** 2025-09-19 11:22:30.994579 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:30.994590 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:30.994601 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:30.994612 | orchestrator | 2025-09-19 11:22:30.994623 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-19 11:22:30.994634 | orchestrator | Friday 19 September 2025 11:22:30 +0000 (0:00:00.159) 0:00:15.022 ****** 2025-09-19 11:22:30.994644 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:30.994655 | orchestrator | 2025-09-19 11:22:30.994666 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-19 11:22:30.994684 | orchestrator | Friday 19 September 2025 11:22:30 +0000 (0:00:00.129) 0:00:15.152 ****** 2025-09-19 11:22:30.994694 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:30.994705 | orchestrator | 2025-09-19 11:22:30.994725 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-19 11:22:38.084414 | orchestrator | Friday 19 September 2025 11:22:30 +0000 (0:00:00.189) 
0:00:15.342 ****** 2025-09-19 11:22:38.084522 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.084537 | orchestrator | 2025-09-19 11:22:38.084550 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-19 11:22:38.084562 | orchestrator | Friday 19 September 2025 11:22:31 +0000 (0:00:00.142) 0:00:15.484 ****** 2025-09-19 11:22:38.084573 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 11:22:38.084584 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-19 11:22:38.084595 | orchestrator | } 2025-09-19 11:22:38.084607 | orchestrator | 2025-09-19 11:22:38.084618 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-19 11:22:38.084629 | orchestrator | Friday 19 September 2025 11:22:31 +0000 (0:00:00.574) 0:00:16.058 ****** 2025-09-19 11:22:38.084640 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 11:22:38.084651 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-19 11:22:38.084662 | orchestrator | } 2025-09-19 11:22:38.084673 | orchestrator | 2025-09-19 11:22:38.084684 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-19 11:22:38.084695 | orchestrator | Friday 19 September 2025 11:22:31 +0000 (0:00:00.168) 0:00:16.226 ****** 2025-09-19 11:22:38.084706 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 11:22:38.084716 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-19 11:22:38.084728 | orchestrator | } 2025-09-19 11:22:38.084739 | orchestrator | 2025-09-19 11:22:38.084750 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-19 11:22:38.084762 | orchestrator | Friday 19 September 2025 11:22:32 +0000 (0:00:00.167) 0:00:16.395 ****** 2025-09-19 11:22:38.084773 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:22:38.084784 | orchestrator | 2025-09-19 11:22:38.084795 | orchestrator | TASK [Gather WAL VGs with 
total and available size in bytes] ******************* 2025-09-19 11:22:38.084806 | orchestrator | Friday 19 September 2025 11:22:32 +0000 (0:00:00.725) 0:00:17.120 ****** 2025-09-19 11:22:38.084816 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:22:38.084827 | orchestrator | 2025-09-19 11:22:38.084838 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-19 11:22:38.084849 | orchestrator | Friday 19 September 2025 11:22:33 +0000 (0:00:00.583) 0:00:17.704 ****** 2025-09-19 11:22:38.084998 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:22:38.085012 | orchestrator | 2025-09-19 11:22:38.085024 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-19 11:22:38.085037 | orchestrator | Friday 19 September 2025 11:22:33 +0000 (0:00:00.568) 0:00:18.272 ****** 2025-09-19 11:22:38.085050 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:22:38.085062 | orchestrator | 2025-09-19 11:22:38.085076 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-19 11:22:38.085088 | orchestrator | Friday 19 September 2025 11:22:34 +0000 (0:00:00.177) 0:00:18.450 ****** 2025-09-19 11:22:38.085115 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085128 | orchestrator | 2025-09-19 11:22:38.085152 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-19 11:22:38.085165 | orchestrator | Friday 19 September 2025 11:22:34 +0000 (0:00:00.135) 0:00:18.585 ****** 2025-09-19 11:22:38.085177 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085190 | orchestrator | 2025-09-19 11:22:38.085202 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-19 11:22:38.085215 | orchestrator | Friday 19 September 2025 11:22:34 +0000 (0:00:00.125) 0:00:18.711 ****** 2025-09-19 11:22:38.085227 | orchestrator | ok: 
[testbed-node-3] => { 2025-09-19 11:22:38.085267 | orchestrator |  "vgs_report": { 2025-09-19 11:22:38.085288 | orchestrator |  "vg": [] 2025-09-19 11:22:38.085304 | orchestrator |  } 2025-09-19 11:22:38.085315 | orchestrator | } 2025-09-19 11:22:38.085326 | orchestrator | 2025-09-19 11:22:38.085337 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-19 11:22:38.085348 | orchestrator | Friday 19 September 2025 11:22:34 +0000 (0:00:00.152) 0:00:18.863 ****** 2025-09-19 11:22:38.085359 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085370 | orchestrator | 2025-09-19 11:22:38.085381 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-19 11:22:38.085392 | orchestrator | Friday 19 September 2025 11:22:34 +0000 (0:00:00.150) 0:00:19.014 ****** 2025-09-19 11:22:38.085402 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085413 | orchestrator | 2025-09-19 11:22:38.085423 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-19 11:22:38.085434 | orchestrator | Friday 19 September 2025 11:22:34 +0000 (0:00:00.159) 0:00:19.173 ****** 2025-09-19 11:22:38.085445 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085456 | orchestrator | 2025-09-19 11:22:38.085466 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-19 11:22:38.085477 | orchestrator | Friday 19 September 2025 11:22:35 +0000 (0:00:00.360) 0:00:19.534 ****** 2025-09-19 11:22:38.085487 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085498 | orchestrator | 2025-09-19 11:22:38.085509 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-19 11:22:38.085527 | orchestrator | Friday 19 September 2025 11:22:35 +0000 (0:00:00.134) 0:00:19.669 ****** 2025-09-19 11:22:38.085543 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 11:22:38.085554 | orchestrator | 2025-09-19 11:22:38.085583 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-19 11:22:38.085594 | orchestrator | Friday 19 September 2025 11:22:35 +0000 (0:00:00.144) 0:00:19.813 ****** 2025-09-19 11:22:38.085605 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085615 | orchestrator | 2025-09-19 11:22:38.085626 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-19 11:22:38.085637 | orchestrator | Friday 19 September 2025 11:22:35 +0000 (0:00:00.138) 0:00:19.952 ****** 2025-09-19 11:22:38.085647 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085658 | orchestrator | 2025-09-19 11:22:38.085669 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-19 11:22:38.085679 | orchestrator | Friday 19 September 2025 11:22:35 +0000 (0:00:00.145) 0:00:20.098 ****** 2025-09-19 11:22:38.085690 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085701 | orchestrator | 2025-09-19 11:22:38.085712 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-19 11:22:38.085741 | orchestrator | Friday 19 September 2025 11:22:35 +0000 (0:00:00.141) 0:00:20.240 ****** 2025-09-19 11:22:38.085757 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085773 | orchestrator | 2025-09-19 11:22:38.085784 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-19 11:22:38.085795 | orchestrator | Friday 19 September 2025 11:22:36 +0000 (0:00:00.153) 0:00:20.394 ****** 2025-09-19 11:22:38.085805 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085816 | orchestrator | 2025-09-19 11:22:38.085826 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-19 11:22:38.085837 | 
orchestrator | Friday 19 September 2025 11:22:36 +0000 (0:00:00.159) 0:00:20.554 ****** 2025-09-19 11:22:38.085848 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085883 | orchestrator | 2025-09-19 11:22:38.085894 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-19 11:22:38.085905 | orchestrator | Friday 19 September 2025 11:22:36 +0000 (0:00:00.173) 0:00:20.727 ****** 2025-09-19 11:22:38.085916 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085927 | orchestrator | 2025-09-19 11:22:38.085937 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-19 11:22:38.085963 | orchestrator | Friday 19 September 2025 11:22:36 +0000 (0:00:00.167) 0:00:20.895 ****** 2025-09-19 11:22:38.085974 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.085984 | orchestrator | 2025-09-19 11:22:38.085995 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-19 11:22:38.086006 | orchestrator | Friday 19 September 2025 11:22:36 +0000 (0:00:00.151) 0:00:21.046 ****** 2025-09-19 11:22:38.086115 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.086130 | orchestrator | 2025-09-19 11:22:38.086141 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-19 11:22:38.086152 | orchestrator | Friday 19 September 2025 11:22:36 +0000 (0:00:00.131) 0:00:21.178 ****** 2025-09-19 11:22:38.086164 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:38.086177 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:38.086187 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
11:22:38.086204 | orchestrator | 2025-09-19 11:22:38.086219 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-19 11:22:38.086229 | orchestrator | Friday 19 September 2025 11:22:37 +0000 (0:00:00.185) 0:00:21.363 ****** 2025-09-19 11:22:38.086240 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:38.086251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:38.086262 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.086272 | orchestrator | 2025-09-19 11:22:38.086283 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-19 11:22:38.086294 | orchestrator | Friday 19 September 2025 11:22:37 +0000 (0:00:00.365) 0:00:21.729 ****** 2025-09-19 11:22:38.086312 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:38.086323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:38.086334 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.086345 | orchestrator | 2025-09-19 11:22:38.086355 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-19 11:22:38.086366 | orchestrator | Friday 19 September 2025 11:22:37 +0000 (0:00:00.185) 0:00:21.914 ****** 2025-09-19 11:22:38.086376 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 
11:22:38.086387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:38.086398 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.086408 | orchestrator | 2025-09-19 11:22:38.086419 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-19 11:22:38.086430 | orchestrator | Friday 19 September 2025 11:22:37 +0000 (0:00:00.209) 0:00:22.124 ****** 2025-09-19 11:22:38.086440 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:38.086451 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:38.086465 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:38.086481 | orchestrator | 2025-09-19 11:22:38.086493 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-19 11:22:38.086512 | orchestrator | Friday 19 September 2025 11:22:37 +0000 (0:00:00.149) 0:00:22.273 ****** 2025-09-19 11:22:38.086523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:38.086542 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:43.992784 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:43.992943 | orchestrator | 2025-09-19 11:22:43.992961 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-19 11:22:43.992974 | orchestrator | Friday 19 September 2025 
11:22:38 +0000 (0:00:00.159) 0:00:22.433 ****** 2025-09-19 11:22:43.992986 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:43.992999 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:43.993010 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:43.993021 | orchestrator | 2025-09-19 11:22:43.993033 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-19 11:22:43.993044 | orchestrator | Friday 19 September 2025 11:22:38 +0000 (0:00:00.161) 0:00:22.594 ****** 2025-09-19 11:22:43.993054 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:43.993065 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:43.993076 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:43.993087 | orchestrator | 2025-09-19 11:22:43.993098 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-19 11:22:43.993109 | orchestrator | Friday 19 September 2025 11:22:38 +0000 (0:00:00.155) 0:00:22.750 ****** 2025-09-19 11:22:43.993120 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:22:43.993131 | orchestrator | 2025-09-19 11:22:43.993142 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-19 11:22:43.993153 | orchestrator | Friday 19 September 2025 11:22:38 +0000 (0:00:00.586) 0:00:23.336 ****** 2025-09-19 11:22:43.993164 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:22:43.993174 | 
orchestrator | 2025-09-19 11:22:43.993185 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-19 11:22:43.993196 | orchestrator | Friday 19 September 2025 11:22:39 +0000 (0:00:00.547) 0:00:23.884 ****** 2025-09-19 11:22:43.993206 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:22:43.993217 | orchestrator | 2025-09-19 11:22:43.993228 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-19 11:22:43.993239 | orchestrator | Friday 19 September 2025 11:22:39 +0000 (0:00:00.189) 0:00:24.074 ****** 2025-09-19 11:22:43.993249 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'vg_name': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'}) 2025-09-19 11:22:43.993261 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'vg_name': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'}) 2025-09-19 11:22:43.993272 | orchestrator | 2025-09-19 11:22:43.993283 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-19 11:22:43.993294 | orchestrator | Friday 19 September 2025 11:22:39 +0000 (0:00:00.237) 0:00:24.311 ****** 2025-09-19 11:22:43.993305 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:43.993316 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:43.993356 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:43.993369 | orchestrator | 2025-09-19 11:22:43.993382 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-19 11:22:43.993394 | orchestrator | Friday 19 September 2025 11:22:40 +0000 
(0:00:00.200) 0:00:24.511 ****** 2025-09-19 11:22:43.993407 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:43.993419 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:43.993431 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:43.993443 | orchestrator | 2025-09-19 11:22:43.993455 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-19 11:22:43.993468 | orchestrator | Friday 19 September 2025 11:22:40 +0000 (0:00:00.365) 0:00:24.876 ****** 2025-09-19 11:22:43.993479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})  2025-09-19 11:22:43.993494 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})  2025-09-19 11:22:43.993506 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:22:43.993518 | orchestrator | 2025-09-19 11:22:43.993530 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-19 11:22:43.993542 | orchestrator | Friday 19 September 2025 11:22:40 +0000 (0:00:00.214) 0:00:25.091 ****** 2025-09-19 11:22:43.993554 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 11:22:43.993566 | orchestrator |  "lvm_report": { 2025-09-19 11:22:43.993579 | orchestrator |  "lv": [ 2025-09-19 11:22:43.993592 | orchestrator |  { 2025-09-19 11:22:43.993620 | orchestrator |  "lv_name": "osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5", 2025-09-19 11:22:43.993633 | orchestrator |  "vg_name": "ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5" 2025-09-19 11:22:43.993643 
| orchestrator |  }, 2025-09-19 11:22:43.993654 | orchestrator |  { 2025-09-19 11:22:43.993665 | orchestrator |  "lv_name": "osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9", 2025-09-19 11:22:43.993676 | orchestrator |  "vg_name": "ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9" 2025-09-19 11:22:43.993686 | orchestrator |  } 2025-09-19 11:22:43.993697 | orchestrator |  ], 2025-09-19 11:22:43.993708 | orchestrator |  "pv": [ 2025-09-19 11:22:43.993718 | orchestrator |  { 2025-09-19 11:22:43.993729 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-19 11:22:43.993740 | orchestrator |  "vg_name": "ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9" 2025-09-19 11:22:43.993751 | orchestrator |  }, 2025-09-19 11:22:43.993761 | orchestrator |  { 2025-09-19 11:22:43.993772 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-19 11:22:43.993782 | orchestrator |  "vg_name": "ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5" 2025-09-19 11:22:43.993793 | orchestrator |  } 2025-09-19 11:22:43.993804 | orchestrator |  ] 2025-09-19 11:22:43.993814 | orchestrator |  } 2025-09-19 11:22:43.993825 | orchestrator | } 2025-09-19 11:22:43.993836 | orchestrator | 2025-09-19 11:22:43.993847 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-19 11:22:43.993875 | orchestrator | 2025-09-19 11:22:43.993886 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 11:22:43.993897 | orchestrator | Friday 19 September 2025 11:22:41 +0000 (0:00:00.313) 0:00:25.405 ****** 2025-09-19 11:22:43.993908 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-19 11:22:43.993919 | orchestrator | 2025-09-19 11:22:43.993939 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 11:22:43.993950 | orchestrator | Friday 19 September 2025 11:22:41 +0000 (0:00:00.308) 0:00:25.713 ****** 2025-09-19 11:22:43.993961 | orchestrator | ok: [testbed-node-4] 
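The tasks above ("Create dict of block VGs -> PVs", "Create block VGs", "Create block LVs", and the final LVM report) all derive their names from the `osd_lvm_uuid` of each entry in `ceph_osd_devices`. A minimal sketch of that naming convention, assuming only what the log output shows (VG `ceph-<uuid>`, LV `osd-block-<uuid>`; the helper name is hypothetical):

```python
def ceph_lvm_names(ceph_osd_devices: dict) -> list[dict]:
    """Map each OSD device entry to the VG/LV names seen in the log.

    Naming convention inferred from the task output above:
      volume group:   ceph-<osd_lvm_uuid>
      logical volume: osd-block-<osd_lvm_uuid>
    """
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]


# The two devices reported for testbed-node-3 in the log:
devices = {
    "sdb": {"osd_lvm_uuid": "f2e5a9ae-16db-5885-a5f1-5293896cd0a9"},
    "sdc": {"osd_lvm_uuid": "d15bf0b7-095a-52ef-97a5-c7d3cf055ef5"},
}

for item in ceph_lvm_names(devices):
    print(item["data_vg"], "->", item["data"])
```

This reproduces the `(item={'data': ..., 'data_vg': ...})` loop items that the "Create block VGs" and "Create block LVs" tasks iterate over; the actual VG/LV creation on the nodes is done by the playbook via LVM, not by this sketch.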
2025-09-19 11:22:43.993972 | orchestrator |
2025-09-19 11:22:43.993983 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:43.993993 | orchestrator | Friday 19 September 2025 11:22:41 +0000 (0:00:00.256) 0:00:25.969 ******
2025-09-19 11:22:43.994088 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-19 11:22:43.994103 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-19 11:22:43.994114 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-19 11:22:43.994125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-19 11:22:43.994135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-19 11:22:43.994146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-19 11:22:43.994157 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-19 11:22:43.994167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-19 11:22:43.994183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-19 11:22:43.994194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-19 11:22:43.994204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-19 11:22:43.994215 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-19 11:22:43.994241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-19 11:22:43.994252 | orchestrator |
2025-09-19 11:22:43.994264 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:43.994285 | orchestrator | Friday 19 September 2025 11:22:42 +0000 (0:00:00.427) 0:00:26.397 ******
2025-09-19 11:22:43.994296 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:43.994307 | orchestrator |
2025-09-19 11:22:43.994318 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:43.994328 | orchestrator | Friday 19 September 2025 11:22:42 +0000 (0:00:00.203) 0:00:26.601 ******
2025-09-19 11:22:43.994339 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:43.994350 | orchestrator |
2025-09-19 11:22:43.994360 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:43.994371 | orchestrator | Friday 19 September 2025 11:22:42 +0000 (0:00:00.216) 0:00:26.818 ******
2025-09-19 11:22:43.994381 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:43.994392 | orchestrator |
2025-09-19 11:22:43.994403 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:43.994414 | orchestrator | Friday 19 September 2025 11:22:42 +0000 (0:00:00.228) 0:00:27.047 ******
2025-09-19 11:22:43.994424 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:43.994435 | orchestrator |
2025-09-19 11:22:43.994446 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:43.994456 | orchestrator | Friday 19 September 2025 11:22:43 +0000 (0:00:00.628) 0:00:27.675 ******
2025-09-19 11:22:43.994467 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:43.994478 | orchestrator |
2025-09-19 11:22:43.994488 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:43.994499 | orchestrator | Friday 19 September 2025 11:22:43 +0000 (0:00:00.241) 0:00:27.917 ******
2025-09-19 11:22:43.994509 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:43.994520 | orchestrator |
2025-09-19 11:22:43.994531 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:43.994551 | orchestrator | Friday 19 September 2025 11:22:43 +0000 (0:00:00.222) 0:00:28.139 ******
2025-09-19 11:22:43.994562 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:43.994573 | orchestrator |
2025-09-19 11:22:43.994592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:55.368497 | orchestrator | Friday 19 September 2025 11:22:43 +0000 (0:00:00.194) 0:00:28.333 ******
2025-09-19 11:22:55.368608 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.368625 | orchestrator |
2025-09-19 11:22:55.368638 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:55.368650 | orchestrator | Friday 19 September 2025 11:22:44 +0000 (0:00:00.191) 0:00:28.524 ******
2025-09-19 11:22:55.368662 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2)
2025-09-19 11:22:55.368674 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2)
2025-09-19 11:22:55.368685 | orchestrator |
2025-09-19 11:22:55.368696 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:55.368707 | orchestrator | Friday 19 September 2025 11:22:44 +0000 (0:00:00.449) 0:00:28.974 ******
2025-09-19 11:22:55.368718 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a7da52da-8ff9-443f-9c01-2997209c642a)
2025-09-19 11:22:55.368728 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a7da52da-8ff9-443f-9c01-2997209c642a)
2025-09-19 11:22:55.368739 | orchestrator |
2025-09-19 11:22:55.368750 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:55.368761 | orchestrator | Friday 19 September 2025 11:22:45 +0000 (0:00:00.441) 0:00:29.416 ******
2025-09-19 11:22:55.368772 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2d05b72c-4493-4412-ad25-c0b6cbf3de12)
2025-09-19 11:22:55.368782 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2d05b72c-4493-4412-ad25-c0b6cbf3de12)
2025-09-19 11:22:55.368793 | orchestrator |
2025-09-19 11:22:55.368804 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:55.368815 | orchestrator | Friday 19 September 2025 11:22:45 +0000 (0:00:00.459) 0:00:29.875 ******
2025-09-19 11:22:55.368826 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c)
2025-09-19 11:22:55.368837 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c)
2025-09-19 11:22:55.368892 | orchestrator |
2025-09-19 11:22:55.368903 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:22:55.368915 | orchestrator | Friday 19 September 2025 11:22:45 +0000 (0:00:00.437) 0:00:30.312 ******
2025-09-19 11:22:55.368925 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-19 11:22:55.368937 | orchestrator |
2025-09-19 11:22:55.368947 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.368959 | orchestrator | Friday 19 September 2025 11:22:46 +0000 (0:00:00.387) 0:00:30.700 ******
2025-09-19 11:22:55.368969 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-19 11:22:55.368997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-19 11:22:55.369008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-19 11:22:55.369019 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-19 11:22:55.369030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-19 11:22:55.369041 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-09-19 11:22:55.369051 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-09-19 11:22:55.369086 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-09-19 11:22:55.369098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-09-19 11:22:55.369109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-09-19 11:22:55.369120 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-09-19 11:22:55.369130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-09-19 11:22:55.369141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-09-19 11:22:55.369152 | orchestrator |
2025-09-19 11:22:55.369163 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.369174 | orchestrator | Friday 19 September 2025 11:22:47 +0000 (0:00:00.845) 0:00:31.545 ******
2025-09-19 11:22:55.369185 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.369195 | orchestrator |
2025-09-19 11:22:55.369206 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.369217 | orchestrator | Friday 19 September 2025 11:22:47 +0000 (0:00:00.228) 0:00:31.774 ******
2025-09-19 11:22:55.369228 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.369239 | orchestrator |
2025-09-19 11:22:55.369250 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.369261 | orchestrator | Friday 19 September 2025 11:22:47 +0000 (0:00:00.266) 0:00:32.041 ******
2025-09-19 11:22:55.369272 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.369283 | orchestrator |
2025-09-19 11:22:55.369294 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.369305 | orchestrator | Friday 19 September 2025 11:22:47 +0000 (0:00:00.231) 0:00:32.272 ******
2025-09-19 11:22:55.369315 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.369326 | orchestrator |
2025-09-19 11:22:55.369354 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.369366 | orchestrator | Friday 19 September 2025 11:22:48 +0000 (0:00:00.218) 0:00:32.491 ******
2025-09-19 11:22:55.369376 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.369387 | orchestrator |
2025-09-19 11:22:55.369398 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.369408 | orchestrator | Friday 19 September 2025 11:22:48 +0000 (0:00:00.224) 0:00:32.715 ******
2025-09-19 11:22:55.369419 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.369430 | orchestrator |
2025-09-19 11:22:55.369440 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.369451 | orchestrator | Friday 19 September 2025 11:22:48 +0000 (0:00:00.296) 0:00:33.012 ******
2025-09-19 11:22:55.369462 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.369472 | orchestrator |
2025-09-19 11:22:55.369483 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.369493 | orchestrator | Friday 19 September 2025 11:22:48 +0000 (0:00:00.283) 0:00:33.295 ******
2025-09-19 11:22:55.369504 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.369515 | orchestrator |
2025-09-19 11:22:55.369525 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.369536 | orchestrator | Friday 19 September 2025 11:22:49 +0000 (0:00:00.257) 0:00:33.552 ******
2025-09-19 11:22:55.369547 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-19 11:22:55.369557 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-19 11:22:55.369568 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-19 11:22:55.369579 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-19 11:22:55.369589 | orchestrator |
2025-09-19 11:22:55.369600 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.369612 | orchestrator | Friday 19 September 2025 11:22:50 +0000 (0:00:00.936) 0:00:34.489 ******
2025-09-19 11:22:55.369631 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.369642 | orchestrator |
2025-09-19 11:22:55.369652 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.369663 | orchestrator | Friday 19 September 2025 11:22:50 +0000 (0:00:00.235) 0:00:34.724 ******
2025-09-19 11:22:55.369674 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.369685 | orchestrator |
2025-09-19 11:22:55.369695 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.369706 | orchestrator | Friday 19 September 2025 11:22:50 +0000 (0:00:00.266) 0:00:34.991 ******
2025-09-19 11:22:55.369717 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.369727 | orchestrator |
2025-09-19 11:22:55.369738 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 11:22:55.369749 | orchestrator | Friday 19 September 2025 11:22:51 +0000 (0:00:00.962) 0:00:35.953 ******
2025-09-19 11:22:55.369759 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.369770 | orchestrator |
2025-09-19 11:22:55.369781 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-19 11:22:55.369792 | orchestrator | Friday 19 September 2025 11:22:51 +0000 (0:00:00.199) 0:00:36.153 ******
2025-09-19 11:22:55.369803 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.369813 | orchestrator |
2025-09-19 11:22:55.369824 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-19 11:22:55.369835 | orchestrator | Friday 19 September 2025 11:22:51 +0000 (0:00:00.135) 0:00:36.288 ******
2025-09-19 11:22:55.369866 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '499bb3ba-5d36-55d4-9ab4-77fea8769c5a'}})
2025-09-19 11:22:55.369878 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '482defc3-95b3-50a2-a4e9-5dea1f7a25a6'}})
2025-09-19 11:22:55.369889 | orchestrator |
2025-09-19 11:22:55.369900 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-19 11:22:55.369911 | orchestrator | Friday 19 September 2025 11:22:52 +0000 (0:00:00.207) 0:00:36.496 ******
2025-09-19 11:22:55.369923 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:22:55.369934 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:22:55.369945 | orchestrator |
2025-09-19 11:22:55.369956 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-19 11:22:55.369967 | orchestrator | Friday 19 September 2025 11:22:53 +0000 (0:00:01.785) 0:00:38.281 ******
2025-09-19 11:22:55.369978 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:22:55.369990 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:22:55.370001 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:22:55.370012 | orchestrator |
2025-09-19 11:22:55.370079 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-19 11:22:55.370091 | orchestrator | Friday 19 September 2025 11:22:54 +0000 (0:00:00.162) 0:00:38.444 ******
2025-09-19 11:22:55.370102 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:22:55.370112 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:22:55.370123 | orchestrator |
2025-09-19 11:22:55.370141 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-19 11:23:01.172313 | orchestrator | Friday 19 September 2025 11:22:55 +0000 (0:00:01.269) 0:00:39.713 ******
2025-09-19 11:23:01.172453 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:01.172471 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:01.172483 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.172495 | orchestrator |
2025-09-19 11:23:01.172507 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-19 11:23:01.172519 | orchestrator | Friday 19 September 2025 11:22:55 +0000 (0:00:00.141) 0:00:39.854 ******
2025-09-19 11:23:01.172530 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.172541 | orchestrator |
2025-09-19 11:23:01.172552 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-19 11:23:01.172563 | orchestrator | Friday 19 September 2025 11:22:55 +0000 (0:00:00.139) 0:00:39.994 ******
2025-09-19 11:23:01.172574 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:01.172603 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:01.172615 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.172626 | orchestrator |
2025-09-19 11:23:01.172637 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-19 11:23:01.172648 | orchestrator | Friday 19 September 2025 11:22:55 +0000 (0:00:00.152) 0:00:40.146 ******
2025-09-19 11:23:01.172659 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.172670 | orchestrator |
2025-09-19 11:23:01.172681 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-19 11:23:01.172692 | orchestrator | Friday 19 September 2025 11:22:55 +0000 (0:00:00.122) 0:00:40.269 ******
2025-09-19 11:23:01.172703 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:01.172714 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:01.172725 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.172735 | orchestrator |
2025-09-19 11:23:01.172747 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-19 11:23:01.172758 | orchestrator | Friday 19 September 2025 11:22:56 +0000 (0:00:00.151) 0:00:40.421 ******
2025-09-19 11:23:01.172769 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.172780 | orchestrator |
2025-09-19 11:23:01.172796 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-19 11:23:01.172808 | orchestrator | Friday 19 September 2025 11:22:56 +0000 (0:00:00.462) 0:00:40.883 ******
2025-09-19 11:23:01.172819 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:01.172830 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:01.172871 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.172884 | orchestrator |
2025-09-19 11:23:01.172897 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-19 11:23:01.172909 | orchestrator | Friday 19 September 2025 11:22:56 +0000 (0:00:00.157) 0:00:41.041 ******
2025-09-19 11:23:01.172923 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:23:01.172937 | orchestrator |
2025-09-19 11:23:01.172950 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-19 11:23:01.172963 | orchestrator | Friday 19 September 2025 11:22:56 +0000 (0:00:00.149) 0:00:41.190 ******
2025-09-19 11:23:01.172984 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:01.172995 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:01.173007 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.173018 | orchestrator |
2025-09-19 11:23:01.173029 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-19 11:23:01.173040 | orchestrator | Friday 19 September 2025 11:22:57 +0000 (0:00:00.170) 0:00:41.360 ******
2025-09-19 11:23:01.173051 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:01.173062 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:01.173072 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.173083 | orchestrator |
2025-09-19 11:23:01.173094 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-19 11:23:01.173105 | orchestrator | Friday 19 September 2025 11:22:57 +0000 (0:00:00.151) 0:00:41.512 ******
2025-09-19 11:23:01.173133 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:01.173145 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:01.173156 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.173166 | orchestrator |
2025-09-19 11:23:01.173177 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-19 11:23:01.173188 | orchestrator | Friday 19 September 2025 11:22:57 +0000 (0:00:00.157) 0:00:41.670 ******
2025-09-19 11:23:01.173198 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.173209 | orchestrator |
2025-09-19 11:23:01.173220 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-19 11:23:01.173231 | orchestrator | Friday 19 September 2025 11:22:57 +0000 (0:00:00.185) 0:00:41.855 ******
2025-09-19 11:23:01.173241 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.173252 | orchestrator |
2025-09-19 11:23:01.173263 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-19 11:23:01.173274 | orchestrator | Friday 19 September 2025 11:22:57 +0000 (0:00:00.165) 0:00:42.021 ******
2025-09-19 11:23:01.173284 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.173295 | orchestrator |
2025-09-19 11:23:01.173306 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-19 11:23:01.173317 | orchestrator | Friday 19 September 2025 11:22:57 +0000 (0:00:00.171) 0:00:42.193 ******
2025-09-19 11:23:01.173327 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 11:23:01.173338 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-09-19 11:23:01.173350 | orchestrator | }
2025-09-19 11:23:01.173361 | orchestrator |
2025-09-19 11:23:01.173372 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-19 11:23:01.173383 | orchestrator | Friday 19 September 2025 11:22:58 +0000 (0:00:00.165) 0:00:42.359 ******
2025-09-19 11:23:01.173393 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 11:23:01.173404 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-09-19 11:23:01.173415 | orchestrator | }
2025-09-19 11:23:01.173425 | orchestrator |
2025-09-19 11:23:01.173436 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-19 11:23:01.173447 | orchestrator | Friday 19 September 2025 11:22:58 +0000 (0:00:00.175) 0:00:42.534 ******
2025-09-19 11:23:01.173458 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 11:23:01.173468 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-09-19 11:23:01.173480 | orchestrator | }
2025-09-19 11:23:01.173498 | orchestrator |
2025-09-19 11:23:01.173509 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-19 11:23:01.173520 | orchestrator | Friday 19 September 2025 11:22:58 +0000 (0:00:00.174) 0:00:42.708 ******
2025-09-19 11:23:01.173531 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:23:01.173542 | orchestrator |
2025-09-19 11:23:01.173552 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-19 11:23:01.173563 | orchestrator | Friday 19 September 2025 11:22:59 +0000 (0:00:00.683) 0:00:43.392 ******
2025-09-19 11:23:01.173574 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:23:01.173585 | orchestrator |
2025-09-19 11:23:01.173601 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-19 11:23:01.173612 | orchestrator | Friday 19 September 2025 11:22:59 +0000 (0:00:00.507) 0:00:43.900 ******
2025-09-19 11:23:01.173623 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:23:01.173634 | orchestrator |
2025-09-19 11:23:01.173645 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-19 11:23:01.173655 | orchestrator | Friday 19 September 2025 11:23:00 +0000 (0:00:00.492) 0:00:44.393 ******
2025-09-19 11:23:01.173666 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:23:01.173677 | orchestrator |
2025-09-19 11:23:01.173688 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-19 11:23:01.173699 | orchestrator | Friday 19 September 2025 11:23:00 +0000 (0:00:00.135) 0:00:44.529 ******
2025-09-19 11:23:01.173709 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.173720 | orchestrator |
2025-09-19 11:23:01.173731 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-19 11:23:01.173742 | orchestrator | Friday 19 September 2025 11:23:00 +0000 (0:00:00.118) 0:00:44.648 ******
2025-09-19 11:23:01.173753 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.173763 | orchestrator |
2025-09-19 11:23:01.173774 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-19 11:23:01.173785 | orchestrator | Friday 19 September 2025 11:23:00 +0000 (0:00:00.143) 0:00:44.791 ******
2025-09-19 11:23:01.173796 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 11:23:01.173807 | orchestrator |  "vgs_report": {
2025-09-19 11:23:01.173818 | orchestrator |  "vg": []
2025-09-19 11:23:01.173829 | orchestrator |  }
2025-09-19 11:23:01.173870 | orchestrator | }
2025-09-19 11:23:01.173882 | orchestrator |
2025-09-19 11:23:01.173893 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-19 11:23:01.173904 | orchestrator | Friday 19 September 2025 11:23:00 +0000 (0:00:00.144) 0:00:44.936 ******
2025-09-19 11:23:01.173914 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.173925 | orchestrator |
2025-09-19 11:23:01.173936 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-19 11:23:01.173947 | orchestrator | Friday 19 September 2025 11:23:00 +0000 (0:00:00.134) 0:00:45.070 ******
2025-09-19 11:23:01.173958 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.173968 | orchestrator |
2025-09-19 11:23:01.173979 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-19 11:23:01.173990 | orchestrator | Friday 19 September 2025 11:23:00 +0000 (0:00:00.131) 0:00:45.202 ******
2025-09-19 11:23:01.174001 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.174014 | orchestrator |
2025-09-19 11:23:01.174118 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-19 11:23:01.174139 | orchestrator | Friday 19 September 2025 11:23:01 +0000 (0:00:00.182) 0:00:45.384 ******
2025-09-19 11:23:01.174158 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:01.174177 | orchestrator |
2025-09-19 11:23:01.174197 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-19 11:23:01.174218 | orchestrator | Friday 19 September 2025 11:23:01 +0000 (0:00:00.136) 0:00:45.521 ******
2025-09-19 11:23:06.504019 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504095 | orchestrator |
2025-09-19 11:23:06.504104 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-19 11:23:06.504127 | orchestrator | Friday 19 September 2025 11:23:01 +0000 (0:00:00.149) 0:00:45.670 ******
2025-09-19 11:23:06.504132 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504136 | orchestrator |
2025-09-19 11:23:06.504141 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-19 11:23:06.504145 | orchestrator | Friday 19 September 2025 11:23:01 +0000 (0:00:00.459) 0:00:46.130 ******
2025-09-19 11:23:06.504150 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504154 | orchestrator |
2025-09-19 11:23:06.504158 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-19 11:23:06.504163 | orchestrator | Friday 19 September 2025 11:23:01 +0000 (0:00:00.141) 0:00:46.271 ******
2025-09-19 11:23:06.504167 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504171 | orchestrator |
2025-09-19 11:23:06.504176 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-19 11:23:06.504180 | orchestrator | Friday 19 September 2025 11:23:02 +0000 (0:00:00.166) 0:00:46.437 ******
2025-09-19 11:23:06.504184 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504188 | orchestrator |
2025-09-19 11:23:06.504192 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-19 11:23:06.504197 | orchestrator | Friday 19 September 2025 11:23:02 +0000 (0:00:00.152) 0:00:46.590 ******
2025-09-19 11:23:06.504201 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504205 | orchestrator |
2025-09-19 11:23:06.504209 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-19 11:23:06.504213 | orchestrator | Friday 19 September 2025 11:23:02 +0000 (0:00:00.153) 0:00:46.744 ******
2025-09-19 11:23:06.504218 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504222 | orchestrator |
2025-09-19 11:23:06.504226 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-19 11:23:06.504230 | orchestrator | Friday 19 September 2025 11:23:02 +0000 (0:00:00.150) 0:00:46.894 ******
2025-09-19 11:23:06.504234 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504239 | orchestrator |
2025-09-19 11:23:06.504243 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-19 11:23:06.504247 | orchestrator | Friday 19 September 2025 11:23:02 +0000 (0:00:00.151) 0:00:47.045 ******
2025-09-19 11:23:06.504251 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504255 | orchestrator |
2025-09-19 11:23:06.504259 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-19 11:23:06.504264 | orchestrator | Friday 19 September 2025 11:23:02 +0000 (0:00:00.171) 0:00:47.216 ******
2025-09-19 11:23:06.504268 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504272 | orchestrator |
2025-09-19 11:23:06.504276 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-19 11:23:06.504281 | orchestrator | Friday 19 September 2025 11:23:03 +0000 (0:00:00.153) 0:00:47.370 ******
2025-09-19 11:23:06.504295 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:06.504302 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:06.504306 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504311 | orchestrator |
2025-09-19 11:23:06.504315 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-19 11:23:06.504319 | orchestrator | Friday 19 September 2025 11:23:03 +0000 (0:00:00.164) 0:00:47.535 ******
2025-09-19 11:23:06.504323 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:06.504327 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:06.504337 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504342 | orchestrator |
2025-09-19 11:23:06.504346 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-19 11:23:06.504350 | orchestrator | Friday 19 September 2025 11:23:03 +0000 (0:00:00.163) 0:00:47.698 ******
2025-09-19 11:23:06.504354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:06.504358 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:06.504362 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504366 | orchestrator |
2025-09-19 11:23:06.504370 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-19 11:23:06.504375 | orchestrator | Friday 19 September 2025 11:23:03 +0000 (0:00:00.187) 0:00:47.885 ******
2025-09-19 11:23:06.504379 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:06.504383 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:06.504387 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504391 | orchestrator |
2025-09-19 11:23:06.504395 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-19 11:23:06.504410 | orchestrator | Friday 19 September 2025 11:23:04 +0000 (0:00:00.469) 0:00:48.355 ******
2025-09-19 11:23:06.504414 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:06.504418 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:06.504422 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504426 | orchestrator |
2025-09-19 11:23:06.504430 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-19 11:23:06.504435 | orchestrator | Friday 19 September 2025 11:23:04 +0000 (0:00:00.179) 0:00:48.534 ******
2025-09-19 11:23:06.504439 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:06.504443 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:06.504447 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504451 | orchestrator |
2025-09-19 11:23:06.504456 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-19 11:23:06.504460 | orchestrator | Friday 19 September 2025 11:23:04 +0000 (0:00:00.197) 0:00:48.732 ******
2025-09-19 11:23:06.504464 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:06.504468 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:06.504472 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:06.504476 | orchestrator |
2025-09-19 11:23:06.504480 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-19 11:23:06.504485 | orchestrator | Friday 19 September 2025 11:23:04 +0000 (0:00:00.191) 0:00:48.924 ******
2025-09-19 11:23:06.504489 | orchestrator | skipping: [testbed-node-4] =>
(item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})  2025-09-19 11:23:06.504493 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})  2025-09-19 11:23:06.504503 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:23:06.504507 | orchestrator | 2025-09-19 11:23:06.504511 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-19 11:23:06.504540 | orchestrator | Friday 19 September 2025 11:23:04 +0000 (0:00:00.170) 0:00:49.094 ****** 2025-09-19 11:23:06.504545 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:23:06.504549 | orchestrator | 2025-09-19 11:23:06.504553 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-19 11:23:06.504557 | orchestrator | Friday 19 September 2025 11:23:05 +0000 (0:00:00.553) 0:00:49.647 ****** 2025-09-19 11:23:06.504561 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:23:06.504566 | orchestrator | 2025-09-19 11:23:06.504570 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-19 11:23:06.504574 | orchestrator | Friday 19 September 2025 11:23:05 +0000 (0:00:00.498) 0:00:50.146 ****** 2025-09-19 11:23:06.504579 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:23:06.504583 | orchestrator | 2025-09-19 11:23:06.504588 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-19 11:23:06.504593 | orchestrator | Friday 19 September 2025 11:23:05 +0000 (0:00:00.158) 0:00:50.305 ****** 2025-09-19 11:23:06.504598 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'vg_name': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'}) 2025-09-19 11:23:06.504604 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'vg_name': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'}) 2025-09-19 11:23:06.504609 | orchestrator | 2025-09-19 11:23:06.504613 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-19 11:23:06.504618 | orchestrator | Friday 19 September 2025 11:23:06 +0000 (0:00:00.187) 0:00:50.492 ****** 2025-09-19 11:23:06.504623 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})  2025-09-19 11:23:06.504628 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})  2025-09-19 11:23:06.504633 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:23:06.504638 | orchestrator | 2025-09-19 11:23:06.504642 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-19 11:23:06.504647 | orchestrator | Friday 19 September 2025 11:23:06 +0000 (0:00:00.171) 0:00:50.664 ****** 2025-09-19 11:23:06.504652 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})  2025-09-19 11:23:06.504657 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})  2025-09-19 11:23:06.504665 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:23:12.604326 | orchestrator | 2025-09-19 11:23:12.604411 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-19 11:23:12.604425 | orchestrator | Friday 19 September 2025 11:23:06 +0000 (0:00:00.189) 0:00:50.853 ****** 2025-09-19 11:23:12.604437 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:23:12.604447 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:23:12.604457 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:12.604467 | orchestrator |
2025-09-19 11:23:12.604477 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-19 11:23:12.604486 | orchestrator | Friday 19 September 2025 11:23:06 +0000 (0:00:00.203) 0:00:51.057 ******
2025-09-19 11:23:12.604514 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 11:23:12.604524 | orchestrator |     "lvm_report": {
2025-09-19 11:23:12.604534 | orchestrator |         "lv": [
2025-09-19 11:23:12.604544 | orchestrator |             {
2025-09-19 11:23:12.604554 | orchestrator |                 "lv_name": "osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6",
2025-09-19 11:23:12.604563 | orchestrator |                 "vg_name": "ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6"
2025-09-19 11:23:12.604573 | orchestrator |             },
2025-09-19 11:23:12.604582 | orchestrator |             {
2025-09-19 11:23:12.604592 | orchestrator |                 "lv_name": "osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a",
2025-09-19 11:23:12.604601 | orchestrator |                 "vg_name": "ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a"
2025-09-19 11:23:12.604611 | orchestrator |             }
2025-09-19 11:23:12.604620 | orchestrator |         ],
2025-09-19 11:23:12.604629 | orchestrator |         "pv": [
2025-09-19 11:23:12.604639 | orchestrator |             {
2025-09-19 11:23:12.604648 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-19 11:23:12.604658 | orchestrator |                 "vg_name": "ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a"
2025-09-19 11:23:12.604667 | orchestrator |             },
2025-09-19 11:23:12.604676 | orchestrator |             {
2025-09-19 11:23:12.604686 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-19 11:23:12.604695 | orchestrator |                 "vg_name": "ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6"
2025-09-19 11:23:12.604705 | orchestrator |             }
2025-09-19 11:23:12.604714 | orchestrator |         ]
2025-09-19 11:23:12.604723 | orchestrator |     }
2025-09-19 11:23:12.604733 | orchestrator | }
2025-09-19 11:23:12.604743 | orchestrator |
2025-09-19 11:23:12.604752 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-19 11:23:12.604762 | orchestrator |
2025-09-19 11:23:12.604771 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 11:23:12.604781 | orchestrator | Friday 19 September 2025 11:23:07 +0000 (0:00:00.648) 0:00:51.705 ******
2025-09-19 11:23:12.604790 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-19 11:23:12.604800 | orchestrator |
2025-09-19 11:23:12.604821 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 11:23:12.604882 | orchestrator | Friday 19 September 2025 11:23:07 +0000 (0:00:00.206) 0:00:51.967 ******
2025-09-19 11:23:12.604893 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:23:12.604903 | orchestrator |
2025-09-19 11:23:12.604915 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 11:23:12.604926 | orchestrator | Friday 19 September 2025 11:23:07 +0000 (0:00:00.206) 0:00:52.174 ******
2025-09-19 11:23:12.604937 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-09-19 11:23:12.604947 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-09-19 11:23:12.604956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-09-19 11:23:12.604966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-09-19 11:23:12.604975 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-19 11:23:12.604985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-19 11:23:12.604994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-19 11:23:12.605004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-19 11:23:12.605013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-19 11:23:12.605022 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-19 11:23:12.605032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-19 11:23:12.605050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-19 11:23:12.605060 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-19 11:23:12.605069 | orchestrator | 2025-09-19 11:23:12.605079 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:23:12.605088 | orchestrator | Friday 19 September 2025 11:23:08 +0000 (0:00:00.391) 0:00:52.566 ****** 2025-09-19 11:23:12.605098 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:12.605107 | orchestrator | 2025-09-19 11:23:12.605120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:23:12.605130 | orchestrator | Friday 19 September 2025 11:23:08 +0000 (0:00:00.198) 0:00:52.764 ****** 2025-09-19 11:23:12.605139 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:12.605149 | orchestrator | 2025-09-19 11:23:12.605159 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:23:12.605183 | orchestrator | 
Friday 19 September 2025 11:23:08 +0000 (0:00:00.196) 0:00:52.961 ****** 2025-09-19 11:23:12.605193 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:12.605202 | orchestrator | 2025-09-19 11:23:12.605212 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:23:12.605221 | orchestrator | Friday 19 September 2025 11:23:08 +0000 (0:00:00.191) 0:00:53.153 ****** 2025-09-19 11:23:12.605231 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:12.605240 | orchestrator | 2025-09-19 11:23:12.605250 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:23:12.605260 | orchestrator | Friday 19 September 2025 11:23:09 +0000 (0:00:00.207) 0:00:53.360 ****** 2025-09-19 11:23:12.605269 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:12.605279 | orchestrator | 2025-09-19 11:23:12.605288 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:23:12.605298 | orchestrator | Friday 19 September 2025 11:23:09 +0000 (0:00:00.183) 0:00:53.544 ****** 2025-09-19 11:23:12.605307 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:12.605317 | orchestrator | 2025-09-19 11:23:12.605326 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:23:12.605336 | orchestrator | Friday 19 September 2025 11:23:09 +0000 (0:00:00.463) 0:00:54.008 ****** 2025-09-19 11:23:12.605345 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:12.605355 | orchestrator | 2025-09-19 11:23:12.605364 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:23:12.605374 | orchestrator | Friday 19 September 2025 11:23:09 +0000 (0:00:00.199) 0:00:54.208 ****** 2025-09-19 11:23:12.605383 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:12.605393 | orchestrator | 2025-09-19 11:23:12.605402 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:23:12.605412 | orchestrator | Friday 19 September 2025 11:23:10 +0000 (0:00:00.222) 0:00:54.430 ****** 2025-09-19 11:23:12.605421 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d) 2025-09-19 11:23:12.605431 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d) 2025-09-19 11:23:12.605441 | orchestrator | 2025-09-19 11:23:12.605450 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:23:12.605460 | orchestrator | Friday 19 September 2025 11:23:10 +0000 (0:00:00.398) 0:00:54.828 ****** 2025-09-19 11:23:12.605469 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4ab3eba9-7f04-4545-b862-1d19a7d78b14) 2025-09-19 11:23:12.605478 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4ab3eba9-7f04-4545-b862-1d19a7d78b14) 2025-09-19 11:23:12.605488 | orchestrator | 2025-09-19 11:23:12.605498 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:23:12.605507 | orchestrator | Friday 19 September 2025 11:23:10 +0000 (0:00:00.420) 0:00:55.249 ****** 2025-09-19 11:23:12.605522 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_82c12b62-ffbd-484b-a107-b043e35ec15c) 2025-09-19 11:23:12.605538 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_82c12b62-ffbd-484b-a107-b043e35ec15c) 2025-09-19 11:23:12.605547 | orchestrator | 2025-09-19 11:23:12.605557 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:23:12.605566 | orchestrator | Friday 19 September 2025 11:23:11 +0000 (0:00:00.427) 0:00:55.677 ****** 2025-09-19 11:23:12.605576 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_23c8bdec-2f7a-480a-98d1-592cee3b582b) 2025-09-19 11:23:12.605585 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_23c8bdec-2f7a-480a-98d1-592cee3b582b) 2025-09-19 11:23:12.605594 | orchestrator | 2025-09-19 11:23:12.605604 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 11:23:12.605613 | orchestrator | Friday 19 September 2025 11:23:11 +0000 (0:00:00.481) 0:00:56.158 ****** 2025-09-19 11:23:12.605623 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 11:23:12.605632 | orchestrator | 2025-09-19 11:23:12.605642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:12.605651 | orchestrator | Friday 19 September 2025 11:23:12 +0000 (0:00:00.378) 0:00:56.536 ****** 2025-09-19 11:23:12.605660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-19 11:23:12.605670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-19 11:23:12.605679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-19 11:23:12.605688 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-19 11:23:12.605698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-19 11:23:12.605707 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-19 11:23:12.605716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-19 11:23:12.605726 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-19 11:23:12.605735 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-19 11:23:12.605745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-19 11:23:12.605754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-19 11:23:12.605768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-19 11:23:22.089755 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-19 11:23:22.089925 | orchestrator | 2025-09-19 11:23:22.089942 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:22.089955 | orchestrator | Friday 19 September 2025 11:23:12 +0000 (0:00:00.409) 0:00:56.946 ****** 2025-09-19 11:23:22.089966 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.089978 | orchestrator | 2025-09-19 11:23:22.089989 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:22.090000 | orchestrator | Friday 19 September 2025 11:23:12 +0000 (0:00:00.211) 0:00:57.157 ****** 2025-09-19 11:23:22.090010 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.090083 | orchestrator | 2025-09-19 11:23:22.090095 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:22.090106 | orchestrator | Friday 19 September 2025 11:23:13 +0000 (0:00:00.203) 0:00:57.361 ****** 2025-09-19 11:23:22.090118 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.090128 | orchestrator | 2025-09-19 11:23:22.090139 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:22.090150 | orchestrator | Friday 19 September 2025 11:23:13 +0000 (0:00:00.512) 0:00:57.873 ****** 2025-09-19 11:23:22.090186 | orchestrator | 
skipping: [testbed-node-5] 2025-09-19 11:23:22.090197 | orchestrator | 2025-09-19 11:23:22.090208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:22.090219 | orchestrator | Friday 19 September 2025 11:23:13 +0000 (0:00:00.182) 0:00:58.055 ****** 2025-09-19 11:23:22.090230 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.090241 | orchestrator | 2025-09-19 11:23:22.090251 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:22.090262 | orchestrator | Friday 19 September 2025 11:23:13 +0000 (0:00:00.217) 0:00:58.273 ****** 2025-09-19 11:23:22.090273 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.090285 | orchestrator | 2025-09-19 11:23:22.090297 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:22.090310 | orchestrator | Friday 19 September 2025 11:23:14 +0000 (0:00:00.208) 0:00:58.482 ****** 2025-09-19 11:23:22.090323 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.090335 | orchestrator | 2025-09-19 11:23:22.090348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:22.090360 | orchestrator | Friday 19 September 2025 11:23:14 +0000 (0:00:00.216) 0:00:58.699 ****** 2025-09-19 11:23:22.090373 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.090385 | orchestrator | 2025-09-19 11:23:22.090398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:22.090410 | orchestrator | Friday 19 September 2025 11:23:14 +0000 (0:00:00.221) 0:00:58.920 ****** 2025-09-19 11:23:22.090423 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-19 11:23:22.090436 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-19 11:23:22.090449 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-19 
11:23:22.090462 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-19 11:23:22.090474 | orchestrator | 2025-09-19 11:23:22.090487 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:22.090499 | orchestrator | Friday 19 September 2025 11:23:15 +0000 (0:00:00.656) 0:00:59.577 ****** 2025-09-19 11:23:22.090511 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.090524 | orchestrator | 2025-09-19 11:23:22.090536 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:22.090549 | orchestrator | Friday 19 September 2025 11:23:15 +0000 (0:00:00.228) 0:00:59.806 ****** 2025-09-19 11:23:22.090562 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.090575 | orchestrator | 2025-09-19 11:23:22.090587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:22.090601 | orchestrator | Friday 19 September 2025 11:23:15 +0000 (0:00:00.224) 0:01:00.031 ****** 2025-09-19 11:23:22.090613 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.090626 | orchestrator | 2025-09-19 11:23:22.090638 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 11:23:22.090650 | orchestrator | Friday 19 September 2025 11:23:15 +0000 (0:00:00.214) 0:01:00.245 ****** 2025-09-19 11:23:22.090660 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.090671 | orchestrator | 2025-09-19 11:23:22.090682 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-19 11:23:22.090699 | orchestrator | Friday 19 September 2025 11:23:16 +0000 (0:00:00.224) 0:01:00.470 ****** 2025-09-19 11:23:22.090718 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.090737 | orchestrator | 2025-09-19 11:23:22.090755 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-09-19 11:23:22.090773 | orchestrator | Friday 19 September 2025 11:23:16 +0000 (0:00:00.486) 0:01:00.956 ****** 2025-09-19 11:23:22.090792 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'}}) 2025-09-19 11:23:22.090810 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9f018b0b-9dc8-5104-9bc9-2c288294c8fd'}}) 2025-09-19 11:23:22.090872 | orchestrator | 2025-09-19 11:23:22.090887 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-19 11:23:22.090903 | orchestrator | Friday 19 September 2025 11:23:16 +0000 (0:00:00.220) 0:01:01.177 ****** 2025-09-19 11:23:22.090921 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'}) 2025-09-19 11:23:22.090939 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'}) 2025-09-19 11:23:22.090957 | orchestrator | 2025-09-19 11:23:22.090975 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-19 11:23:22.091017 | orchestrator | Friday 19 September 2025 11:23:18 +0000 (0:00:01.868) 0:01:03.046 ****** 2025-09-19 11:23:22.091038 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:22.091060 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:22.091078 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.091096 | orchestrator | 2025-09-19 11:23:22.091107 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-09-19 11:23:22.091118 | orchestrator | Friday 19 September 2025 11:23:18 +0000 (0:00:00.161) 0:01:03.207 ****** 2025-09-19 11:23:22.091129 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'}) 2025-09-19 11:23:22.091158 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'}) 2025-09-19 11:23:22.091170 | orchestrator | 2025-09-19 11:23:22.091181 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-19 11:23:22.091192 | orchestrator | Friday 19 September 2025 11:23:20 +0000 (0:00:01.376) 0:01:04.583 ****** 2025-09-19 11:23:22.091203 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:22.091214 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:22.091224 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.091235 | orchestrator | 2025-09-19 11:23:22.091245 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-19 11:23:22.091256 | orchestrator | Friday 19 September 2025 11:23:20 +0000 (0:00:00.207) 0:01:04.791 ****** 2025-09-19 11:23:22.091266 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.091277 | orchestrator | 2025-09-19 11:23:22.091287 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-19 11:23:22.091298 | orchestrator | Friday 19 September 2025 11:23:20 +0000 (0:00:00.189) 0:01:04.980 ****** 2025-09-19 11:23:22.091309 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:22.091325 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:22.091336 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.091347 | orchestrator | 2025-09-19 11:23:22.091357 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-19 11:23:22.091368 | orchestrator | Friday 19 September 2025 11:23:20 +0000 (0:00:00.185) 0:01:05.166 ****** 2025-09-19 11:23:22.091379 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.091389 | orchestrator | 2025-09-19 11:23:22.091400 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-19 11:23:22.091420 | orchestrator | Friday 19 September 2025 11:23:20 +0000 (0:00:00.151) 0:01:05.317 ****** 2025-09-19 11:23:22.091431 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:22.091442 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:22.091453 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.091463 | orchestrator | 2025-09-19 11:23:22.091474 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-19 11:23:22.091484 | orchestrator | Friday 19 September 2025 11:23:21 +0000 (0:00:00.191) 0:01:05.508 ****** 2025-09-19 11:23:22.091495 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.091505 | orchestrator | 2025-09-19 11:23:22.091516 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-09-19 11:23:22.091527 | orchestrator | Friday 19 September 2025 11:23:21 +0000 (0:00:00.152) 0:01:05.661 ****** 2025-09-19 11:23:22.091537 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:22.091548 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:22.091558 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:22.091569 | orchestrator | 2025-09-19 11:23:22.091580 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-19 11:23:22.091590 | orchestrator | Friday 19 September 2025 11:23:21 +0000 (0:00:00.157) 0:01:05.819 ****** 2025-09-19 11:23:22.091601 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:23:22.091611 | orchestrator | 2025-09-19 11:23:22.091622 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-19 11:23:22.091633 | orchestrator | Friday 19 September 2025 11:23:21 +0000 (0:00:00.137) 0:01:05.956 ****** 2025-09-19 11:23:22.091651 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:28.357109 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:28.357225 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.357255 | orchestrator | 2025-09-19 11:23:28.357270 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-19 11:23:28.357283 | orchestrator | Friday 19 September 2025 
11:23:22 +0000 (0:00:00.481) 0:01:06.438 ****** 2025-09-19 11:23:28.357295 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:28.357306 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:28.357317 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.357328 | orchestrator | 2025-09-19 11:23:28.357340 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-19 11:23:28.357351 | orchestrator | Friday 19 September 2025 11:23:22 +0000 (0:00:00.161) 0:01:06.600 ****** 2025-09-19 11:23:28.357362 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:28.357373 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:28.357383 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.357394 | orchestrator | 2025-09-19 11:23:28.357428 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-19 11:23:28.357440 | orchestrator | Friday 19 September 2025 11:23:22 +0000 (0:00:00.156) 0:01:06.756 ****** 2025-09-19 11:23:28.357451 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.357462 | orchestrator | 2025-09-19 11:23:28.357473 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-19 11:23:28.357483 | orchestrator | Friday 19 September 2025 11:23:22 +0000 (0:00:00.141) 0:01:06.898 ****** 2025-09-19 11:23:28.357494 | orchestrator | skipping: [testbed-node-5] 2025-09-19 
11:23:28.357505 | orchestrator | 2025-09-19 11:23:28.357516 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-19 11:23:28.357526 | orchestrator | Friday 19 September 2025 11:23:22 +0000 (0:00:00.139) 0:01:07.038 ****** 2025-09-19 11:23:28.357537 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.357548 | orchestrator | 2025-09-19 11:23:28.357558 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-19 11:23:28.357580 | orchestrator | Friday 19 September 2025 11:23:22 +0000 (0:00:00.129) 0:01:07.167 ****** 2025-09-19 11:23:28.357591 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 11:23:28.357602 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-19 11:23:28.357613 | orchestrator | } 2025-09-19 11:23:28.357624 | orchestrator | 2025-09-19 11:23:28.357635 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-19 11:23:28.357646 | orchestrator | Friday 19 September 2025 11:23:22 +0000 (0:00:00.132) 0:01:07.300 ****** 2025-09-19 11:23:28.357658 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 11:23:28.357670 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-19 11:23:28.357682 | orchestrator | } 2025-09-19 11:23:28.357694 | orchestrator | 2025-09-19 11:23:28.357706 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-19 11:23:28.357718 | orchestrator | Friday 19 September 2025 11:23:23 +0000 (0:00:00.153) 0:01:07.454 ****** 2025-09-19 11:23:28.357731 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 11:23:28.357743 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-19 11:23:28.357754 | orchestrator | } 2025-09-19 11:23:28.357767 | orchestrator | 2025-09-19 11:23:28.357779 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-19 11:23:28.357791 | 
orchestrator | Friday 19 September 2025 11:23:23 +0000 (0:00:00.153) 0:01:07.608 ****** 2025-09-19 11:23:28.357803 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:23:28.357836 | orchestrator | 2025-09-19 11:23:28.357848 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-19 11:23:28.357860 | orchestrator | Friday 19 September 2025 11:23:23 +0000 (0:00:00.516) 0:01:08.124 ****** 2025-09-19 11:23:28.357872 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:23:28.357885 | orchestrator | 2025-09-19 11:23:28.357897 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-19 11:23:28.357909 | orchestrator | Friday 19 September 2025 11:23:24 +0000 (0:00:00.532) 0:01:08.657 ****** 2025-09-19 11:23:28.357921 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:23:28.357932 | orchestrator | 2025-09-19 11:23:28.357944 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-19 11:23:28.357956 | orchestrator | Friday 19 September 2025 11:23:24 +0000 (0:00:00.528) 0:01:09.186 ****** 2025-09-19 11:23:28.357968 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:23:28.357980 | orchestrator | 2025-09-19 11:23:28.357992 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-19 11:23:28.358006 | orchestrator | Friday 19 September 2025 11:23:25 +0000 (0:00:00.380) 0:01:09.566 ****** 2025-09-19 11:23:28.358065 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358078 | orchestrator | 2025-09-19 11:23:28.358089 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-19 11:23:28.358112 | orchestrator | Friday 19 September 2025 11:23:25 +0000 (0:00:00.122) 0:01:09.688 ****** 2025-09-19 11:23:28.358123 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358154 | orchestrator | 2025-09-19 11:23:28.358166 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-19 11:23:28.358176 | orchestrator | Friday 19 September 2025 11:23:25 +0000 (0:00:00.107) 0:01:09.796 ****** 2025-09-19 11:23:28.358187 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 11:23:28.358198 | orchestrator |  "vgs_report": { 2025-09-19 11:23:28.358210 | orchestrator |  "vg": [] 2025-09-19 11:23:28.358238 | orchestrator |  } 2025-09-19 11:23:28.358250 | orchestrator | } 2025-09-19 11:23:28.358260 | orchestrator | 2025-09-19 11:23:28.358271 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-19 11:23:28.358282 | orchestrator | Friday 19 September 2025 11:23:25 +0000 (0:00:00.153) 0:01:09.949 ****** 2025-09-19 11:23:28.358293 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358303 | orchestrator | 2025-09-19 11:23:28.358314 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-19 11:23:28.358325 | orchestrator | Friday 19 September 2025 11:23:25 +0000 (0:00:00.145) 0:01:10.094 ****** 2025-09-19 11:23:28.358335 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358346 | orchestrator | 2025-09-19 11:23:28.358357 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-19 11:23:28.358367 | orchestrator | Friday 19 September 2025 11:23:25 +0000 (0:00:00.151) 0:01:10.246 ****** 2025-09-19 11:23:28.358378 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358389 | orchestrator | 2025-09-19 11:23:28.358399 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-19 11:23:28.358410 | orchestrator | Friday 19 September 2025 11:23:26 +0000 (0:00:00.149) 0:01:10.395 ****** 2025-09-19 11:23:28.358420 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358431 | orchestrator | 2025-09-19 11:23:28.358442 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-19 11:23:28.358452 | orchestrator | Friday 19 September 2025 11:23:26 +0000 (0:00:00.157) 0:01:10.553 ****** 2025-09-19 11:23:28.358463 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358474 | orchestrator | 2025-09-19 11:23:28.358484 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-19 11:23:28.358495 | orchestrator | Friday 19 September 2025 11:23:26 +0000 (0:00:00.177) 0:01:10.730 ****** 2025-09-19 11:23:28.358505 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358516 | orchestrator | 2025-09-19 11:23:28.358527 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-19 11:23:28.358538 | orchestrator | Friday 19 September 2025 11:23:26 +0000 (0:00:00.157) 0:01:10.888 ****** 2025-09-19 11:23:28.358548 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358559 | orchestrator | 2025-09-19 11:23:28.358569 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-19 11:23:28.358580 | orchestrator | Friday 19 September 2025 11:23:26 +0000 (0:00:00.137) 0:01:11.025 ****** 2025-09-19 11:23:28.358590 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358601 | orchestrator | 2025-09-19 11:23:28.358612 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-19 11:23:28.358622 | orchestrator | Friday 19 September 2025 11:23:26 +0000 (0:00:00.145) 0:01:11.171 ****** 2025-09-19 11:23:28.358633 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358643 | orchestrator | 2025-09-19 11:23:28.358654 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-19 11:23:28.358665 | orchestrator | Friday 19 September 2025 11:23:27 +0000 (0:00:00.338) 0:01:11.509 ****** 
2025-09-19 11:23:28.358686 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358704 | orchestrator | 2025-09-19 11:23:28.358723 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-19 11:23:28.358739 | orchestrator | Friday 19 September 2025 11:23:27 +0000 (0:00:00.143) 0:01:11.652 ****** 2025-09-19 11:23:28.358754 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358770 | orchestrator | 2025-09-19 11:23:28.358786 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-19 11:23:28.358865 | orchestrator | Friday 19 September 2025 11:23:27 +0000 (0:00:00.125) 0:01:11.778 ****** 2025-09-19 11:23:28.358884 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358901 | orchestrator | 2025-09-19 11:23:28.358918 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-19 11:23:28.358937 | orchestrator | Friday 19 September 2025 11:23:27 +0000 (0:00:00.147) 0:01:11.925 ****** 2025-09-19 11:23:28.358955 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.358973 | orchestrator | 2025-09-19 11:23:28.358993 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-19 11:23:28.359011 | orchestrator | Friday 19 September 2025 11:23:27 +0000 (0:00:00.137) 0:01:12.062 ****** 2025-09-19 11:23:28.359031 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.359050 | orchestrator | 2025-09-19 11:23:28.359102 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-19 11:23:28.359121 | orchestrator | Friday 19 September 2025 11:23:27 +0000 (0:00:00.147) 0:01:12.210 ****** 2025-09-19 11:23:28.359133 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 
11:23:28.359144 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:28.359155 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.359166 | orchestrator | 2025-09-19 11:23:28.359176 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-19 11:23:28.359187 | orchestrator | Friday 19 September 2025 11:23:28 +0000 (0:00:00.176) 0:01:12.387 ****** 2025-09-19 11:23:28.359197 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:28.359208 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:28.359219 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:28.359230 | orchestrator | 2025-09-19 11:23:28.359240 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-19 11:23:28.359251 | orchestrator | Friday 19 September 2025 11:23:28 +0000 (0:00:00.162) 0:01:12.550 ****** 2025-09-19 11:23:28.359274 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:31.408332 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:31.408426 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:31.408439 | orchestrator | 2025-09-19 11:23:31.408451 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-19 11:23:31.408463 | orchestrator | Friday 19 September 2025 
11:23:28 +0000 (0:00:00.158) 0:01:12.708 ****** 2025-09-19 11:23:31.408473 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:31.408483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:31.408493 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:31.408502 | orchestrator | 2025-09-19 11:23:31.408512 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-19 11:23:31.408522 | orchestrator | Friday 19 September 2025 11:23:28 +0000 (0:00:00.155) 0:01:12.864 ****** 2025-09-19 11:23:31.408531 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:31.408563 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:31.408573 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:31.408583 | orchestrator | 2025-09-19 11:23:31.408592 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-19 11:23:31.408602 | orchestrator | Friday 19 September 2025 11:23:28 +0000 (0:00:00.153) 0:01:13.017 ****** 2025-09-19 11:23:31.408611 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:31.408621 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:31.408630 | orchestrator | skipping: 
[testbed-node-5] 2025-09-19 11:23:31.408640 | orchestrator | 2025-09-19 11:23:31.408649 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-19 11:23:31.408659 | orchestrator | Friday 19 September 2025 11:23:28 +0000 (0:00:00.151) 0:01:13.169 ****** 2025-09-19 11:23:31.408668 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:31.408678 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:31.408688 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:31.408697 | orchestrator | 2025-09-19 11:23:31.408706 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-19 11:23:31.408716 | orchestrator | Friday 19 September 2025 11:23:29 +0000 (0:00:00.375) 0:01:13.544 ****** 2025-09-19 11:23:31.408726 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:31.408735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:31.408745 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:31.408754 | orchestrator | 2025-09-19 11:23:31.408764 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-19 11:23:31.408773 | orchestrator | Friday 19 September 2025 11:23:29 +0000 (0:00:00.157) 0:01:13.701 ****** 2025-09-19 11:23:31.408783 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:23:31.408793 | orchestrator | 2025-09-19 11:23:31.408803 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-09-19 11:23:31.408849 | orchestrator | Friday 19 September 2025 11:23:29 +0000 (0:00:00.514) 0:01:14.215 ****** 2025-09-19 11:23:31.408859 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:23:31.408869 | orchestrator | 2025-09-19 11:23:31.408880 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-19 11:23:31.408890 | orchestrator | Friday 19 September 2025 11:23:30 +0000 (0:00:00.524) 0:01:14.740 ****** 2025-09-19 11:23:31.408901 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:23:31.408912 | orchestrator | 2025-09-19 11:23:31.408923 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-19 11:23:31.408933 | orchestrator | Friday 19 September 2025 11:23:30 +0000 (0:00:00.157) 0:01:14.898 ****** 2025-09-19 11:23:31.408943 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'vg_name': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'}) 2025-09-19 11:23:31.408955 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'vg_name': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'}) 2025-09-19 11:23:31.408966 | orchestrator | 2025-09-19 11:23:31.408976 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-19 11:23:31.408996 | orchestrator | Friday 19 September 2025 11:23:30 +0000 (0:00:00.174) 0:01:15.072 ****** 2025-09-19 11:23:31.409023 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:31.409035 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:31.409045 | orchestrator | skipping: 
[testbed-node-5] 2025-09-19 11:23:31.409056 | orchestrator | 2025-09-19 11:23:31.409067 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-19 11:23:31.409077 | orchestrator | Friday 19 September 2025 11:23:30 +0000 (0:00:00.161) 0:01:15.233 ****** 2025-09-19 11:23:31.409088 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:31.409101 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:31.409118 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:31.409135 | orchestrator | 2025-09-19 11:23:31.409153 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-19 11:23:31.409168 | orchestrator | Friday 19 September 2025 11:23:31 +0000 (0:00:00.184) 0:01:15.418 ****** 2025-09-19 11:23:31.409184 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})  2025-09-19 11:23:31.409220 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})  2025-09-19 11:23:31.409238 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:31.409254 | orchestrator | 2025-09-19 11:23:31.409265 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-19 11:23:31.409275 | orchestrator | Friday 19 September 2025 11:23:31 +0000 (0:00:00.159) 0:01:15.577 ****** 2025-09-19 11:23:31.409284 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 11:23:31.409294 | orchestrator |  "lvm_report": { 2025-09-19 11:23:31.409305 | orchestrator |  "lv": [ 2025-09-19 
11:23:31.409314 | orchestrator |  {
2025-09-19 11:23:31.409324 | orchestrator |  "lv_name": "osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6",
2025-09-19 11:23:31.409335 | orchestrator |  "vg_name": "ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6"
2025-09-19 11:23:31.409344 | orchestrator |  },
2025-09-19 11:23:31.409359 | orchestrator |  {
2025-09-19 11:23:31.409369 | orchestrator |  "lv_name": "osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd",
2025-09-19 11:23:31.409378 | orchestrator |  "vg_name": "ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd"
2025-09-19 11:23:31.409388 | orchestrator |  }
2025-09-19 11:23:31.409397 | orchestrator |  ],
2025-09-19 11:23:31.409407 | orchestrator |  "pv": [
2025-09-19 11:23:31.409416 | orchestrator |  {
2025-09-19 11:23:31.409425 | orchestrator |  "pv_name": "/dev/sdb",
2025-09-19 11:23:31.409435 | orchestrator |  "vg_name": "ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6"
2025-09-19 11:23:31.409444 | orchestrator |  },
2025-09-19 11:23:31.409454 | orchestrator |  {
2025-09-19 11:23:31.409463 | orchestrator |  "pv_name": "/dev/sdc",
2025-09-19 11:23:31.409473 | orchestrator |  "vg_name": "ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd"
2025-09-19 11:23:31.409483 | orchestrator |  }
2025-09-19 11:23:31.409492 | orchestrator |  ]
2025-09-19 11:23:31.409501 | orchestrator |  }
2025-09-19 11:23:31.409511 | orchestrator | }
2025-09-19 11:23:31.409521 | orchestrator |
2025-09-19 11:23:31.409531 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:23:31.409540 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-19 11:23:31.409558 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-19 11:23:31.409568 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-19 11:23:31.409577 | orchestrator |
2025-09-19 11:23:31.409587 | 
orchestrator |
2025-09-19 11:23:31.409596 | orchestrator |
2025-09-19 11:23:31.409606 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:23:31.409615 | orchestrator | Friday 19 September 2025 11:23:31 +0000 (0:00:00.159)       0:01:15.737 ******
2025-09-19 11:23:31.409625 | orchestrator | ===============================================================================
2025-09-19 11:23:31.409634 | orchestrator | Create block VGs -------------------------------------------------------- 5.72s
2025-09-19 11:23:31.409644 | orchestrator | Create block LVs -------------------------------------------------------- 4.19s
2025-09-19 11:23:31.409653 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.93s
2025-09-19 11:23:31.409663 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.65s
2025-09-19 11:23:31.409672 | orchestrator | Add known partitions to the list of available block devices ------------- 1.65s
2025-09-19 11:23:31.409681 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.62s
2025-09-19 11:23:31.409691 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.59s
2025-09-19 11:23:31.409700 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s
2025-09-19 11:23:31.409717 | orchestrator | Add known links to the list of available block devices ------------------ 1.21s
2025-09-19 11:23:31.946179 | orchestrator | Print LVM report data --------------------------------------------------- 1.12s
2025-09-19 11:23:31.946275 | orchestrator | Add known partitions to the list of available block devices ------------- 0.96s
2025-09-19 11:23:31.946290 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s
2025-09-19 11:23:31.946301 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.87s
2025-09-19 11:23:31.946313 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s
2025-09-19 11:23:31.946323 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.84s
2025-09-19 11:23:31.946334 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.82s
2025-09-19 11:23:31.946345 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s
2025-09-19 11:23:31.946356 | orchestrator | Create DB+WAL VGs ------------------------------------------------------- 0.76s
2025-09-19 11:23:31.946366 | orchestrator | Print size needed for LVs on ceph_wal_devices --------------------------- 0.76s
2025-09-19 11:23:31.946377 | orchestrator | Check whether ceph_db_wal_devices is used exclusively ------------------- 0.75s
2025-09-19 11:23:44.302734 | orchestrator | 2025-09-19 11:23:44 | INFO  | Task 1b23098c-f67d-4aa9-8be9-c82965ce589d (facts) was prepared for execution.
2025-09-19 11:23:44.302876 | orchestrator | 2025-09-19 11:23:44 | INFO  | It takes a moment until task 1b23098c-f67d-4aa9-8be9-c82965ce589d (facts) has been started and output is visible here.
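(Annotation, not part of the job log.) The play above gathers Ceph LV and PV lists as JSON and then runs a "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" step before printing the `lvm_report` shown in the recap. A minimal sketch of that combine step, assuming the `lvs`/`pvs --reportformat json` output shape; the variable names mirror the log, but the parsing code itself is an illustration, not the playbook's actual implementation:

```python
# Hedged sketch: merge the "lv" and "pv" report sections that
# `lvs --reportformat json` and `pvs --reportformat json` emit
# into one lvm_report dict like the one printed in the log above.
import json

# Abridged command outputs, using the LV/PV names from the log.
_lvs_cmd_output = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6",
     "vg_name": "ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6"},
    {"lv_name": "osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd",
     "vg_name": "ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd"},
]}]})
_pvs_cmd_output = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6"},
    {"pv_name": "/dev/sdc",
     "vg_name": "ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd"},
]}]})

# Combine both report sections into the lvm_report structure.
lvm_report = {
    "lv": json.loads(_lvs_cmd_output)["report"][0]["lv"],
    "pv": json.loads(_pvs_cmd_output)["report"][0]["pv"],
}
print(json.dumps(lvm_report, indent=4))
```

The combined dict has the same shape as the `lvm_report` debug output in the log: two block LVs, each backed by one PV (`/dev/sdb`, `/dev/sdc`) in a matching `ceph-<uuid>` VG.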
2025-09-19 11:23:56.235327 | orchestrator | 2025-09-19 11:23:56.235427 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-19 11:23:56.235443 | orchestrator | 2025-09-19 11:23:56.235455 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-19 11:23:56.235466 | orchestrator | Friday 19 September 2025 11:23:48 +0000 (0:00:00.279) 0:00:00.279 ****** 2025-09-19 11:23:56.235477 | orchestrator | ok: [testbed-manager] 2025-09-19 11:23:56.235489 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:23:56.235500 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:23:56.235538 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:23:56.235565 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:23:56.235576 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:23:56.235587 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:23:56.235597 | orchestrator | 2025-09-19 11:23:56.235608 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-19 11:23:56.235619 | orchestrator | Friday 19 September 2025 11:23:49 +0000 (0:00:01.155) 0:00:01.434 ****** 2025-09-19 11:23:56.235630 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:23:56.235657 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:23:56.235668 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:23:56.235680 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:23:56.235690 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:23:56.235701 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:23:56.235711 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:23:56.235722 | orchestrator | 2025-09-19 11:23:56.235733 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 11:23:56.235743 | orchestrator | 2025-09-19 11:23:56.235754 | orchestrator | TASK [Gathers facts about hosts] 
***********************************************
2025-09-19 11:23:56.235765 | orchestrator | Friday 19 September 2025 11:23:50 +0000 (0:00:01.258)       0:00:02.693 ******
2025-09-19 11:23:56.235775 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:23:56.235839 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:23:56.235854 | orchestrator | ok: [testbed-manager]
2025-09-19 11:23:56.235864 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:23:56.235877 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:23:56.235889 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:23:56.235901 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:23:56.235912 | orchestrator |
2025-09-19 11:23:56.235925 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-19 11:23:56.235937 | orchestrator |
2025-09-19 11:23:56.235949 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-19 11:23:56.235961 | orchestrator | Friday 19 September 2025 11:23:55 +0000 (0:00:04.815)       0:00:07.508 ******
2025-09-19 11:23:56.235972 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:23:56.235984 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:23:56.235997 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:23:56.236008 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:23:56.236020 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:23:56.236032 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:23:56.236044 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:23:56.236056 | orchestrator |
2025-09-19 11:23:56.236067 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:23:56.236081 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:23:56.236095 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:23:56.236107 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:23:56.236119 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:23:56.236130 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:23:56.236143 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:23:56.236155 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:23:56.236167 | orchestrator |
2025-09-19 11:23:56.236179 | orchestrator |
2025-09-19 11:23:56.236201 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:23:56.236213 | orchestrator | Friday 19 September 2025 11:23:55 +0000 (0:00:00.498)       0:00:08.006 ******
2025-09-19 11:23:56.236225 | orchestrator | ===============================================================================
2025-09-19 11:23:56.236237 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.82s
2025-09-19 11:23:56.236250 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s
2025-09-19 11:23:56.236260 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.16s
2025-09-19 11:23:56.236271 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2025-09-19 11:24:08.427883 | orchestrator | 2025-09-19 11:24:08 | INFO  | Task 4b514ad2-2f8f-48b8-acb7-e68586d745b0 (frr) was prepared for execution.
2025-09-19 11:24:08.427994 | orchestrator | 2025-09-19 11:24:08 | INFO  | It takes a moment until task 4b514ad2-2f8f-48b8-acb7-e68586d745b0 (frr) has been started and output is visible here.
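(Annotation, not part of the job log.) The `osism.commons.facts` tasks above create a custom facts directory and copy fact files into it. In stock Ansible, local facts live in `/etc/ansible/facts.d`: any `*.fact` file containing JSON (or an executable printing JSON) is exposed to later plays as `ansible_local.<basename>`. A minimal sketch of that mechanism, using a temp directory as a stand-in for `/etc/ansible/facts.d`; the file name `testbed.fact` and its contents are illustrative assumptions, not the role's actual fact files:

```python
# Hedged sketch: how an Ansible local fact file round-trips.
# A temp dir stands in for /etc/ansible/facts.d; the fact name and
# payload below are made up for illustration.
import json
import tempfile
from pathlib import Path

facts_d = Path(tempfile.mkdtemp())            # stand-in for /etc/ansible/facts.d
fact_file = facts_d / "testbed.fact"          # hypothetical fact file name
fact_file.write_text(json.dumps({"role": "storage", "osd_count": 2}))

# Ansible's setup module would surface this as ansible_local.testbed
ansible_local = {fact_file.stem: json.loads(fact_file.read_text())}
print(ansible_local["testbed"])
```

Later tasks could then branch on `ansible_local.testbed.osd_count` the same way the Ceph plays above branch on `lvm_volumes`.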
2025-09-19 11:24:34.217548 | orchestrator |
2025-09-19 11:24:34.217615 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-09-19 11:24:34.217621 | orchestrator |
2025-09-19 11:24:34.217626 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-09-19 11:24:34.217631 | orchestrator | Friday 19 September 2025 11:24:12 +0000 (0:00:00.216) 0:00:00.216 ******
2025-09-19 11:24:34.217636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 11:24:34.217641 | orchestrator |
2025-09-19 11:24:34.217645 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-09-19 11:24:34.217649 | orchestrator | Friday 19 September 2025 11:24:12 +0000 (0:00:00.200) 0:00:00.416 ******
2025-09-19 11:24:34.217653 | orchestrator | changed: [testbed-manager]
2025-09-19 11:24:34.217657 | orchestrator |
2025-09-19 11:24:34.217661 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-09-19 11:24:34.217665 | orchestrator | Friday 19 September 2025 11:24:13 +0000 (0:00:01.049) 0:00:01.466 ******
2025-09-19 11:24:34.217669 | orchestrator | changed: [testbed-manager]
2025-09-19 11:24:34.217672 | orchestrator |
2025-09-19 11:24:34.217676 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-09-19 11:24:34.217685 | orchestrator | Friday 19 September 2025 11:24:23 +0000 (0:00:09.803) 0:00:11.270 ******
2025-09-19 11:24:34.217689 | orchestrator | ok: [testbed-manager]
2025-09-19 11:24:34.217694 | orchestrator |
2025-09-19 11:24:34.217698 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-09-19 11:24:34.217701 | orchestrator | Friday 19 September 2025 11:24:24 +0000 (0:00:01.235) 0:00:12.506 ******
2025-09-19 11:24:34.217705 | orchestrator | changed: [testbed-manager]
2025-09-19 11:24:34.217709 | orchestrator |
2025-09-19 11:24:34.217713 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-09-19 11:24:34.217716 | orchestrator | Friday 19 September 2025 11:24:25 +0000 (0:00:00.865) 0:00:13.371 ******
2025-09-19 11:24:34.217720 | orchestrator | ok: [testbed-manager]
2025-09-19 11:24:34.217724 | orchestrator |
2025-09-19 11:24:34.217728 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-09-19 11:24:34.217732 | orchestrator | Friday 19 September 2025 11:24:26 +0000 (0:00:01.211) 0:00:14.582 ******
2025-09-19 11:24:34.217736 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:24:34.217740 | orchestrator |
2025-09-19 11:24:34.217744 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-09-19 11:24:34.217747 | orchestrator | Friday 19 September 2025 11:24:27 +0000 (0:00:00.867) 0:00:15.450 ******
2025-09-19 11:24:34.217791 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:24:34.217795 | orchestrator |
2025-09-19 11:24:34.217798 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-09-19 11:24:34.217815 | orchestrator | Friday 19 September 2025 11:24:27 +0000 (0:00:00.181) 0:00:15.631 ******
2025-09-19 11:24:34.217819 | orchestrator | changed: [testbed-manager]
2025-09-19 11:24:34.217823 | orchestrator |
2025-09-19 11:24:34.217826 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-09-19 11:24:34.217830 | orchestrator | Friday 19 September 2025 11:24:28 +0000 (0:00:00.958) 0:00:16.589 ******
2025-09-19 11:24:34.217834 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-09-19 11:24:34.217838 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-09-19 11:24:34.217843 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-09-19 11:24:34.217847 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-09-19 11:24:34.217850 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-09-19 11:24:34.217854 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-09-19 11:24:34.217858 | orchestrator |
2025-09-19 11:24:34.217862 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-09-19 11:24:34.217866 | orchestrator | Friday 19 September 2025 11:24:30 +0000 (0:00:02.324) 0:00:18.913 ******
2025-09-19 11:24:34.217869 | orchestrator | ok: [testbed-manager]
2025-09-19 11:24:34.217873 | orchestrator |
2025-09-19 11:24:34.217877 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-09-19 11:24:34.217881 | orchestrator | Friday 19 September 2025 11:24:32 +0000 (0:00:01.408) 0:00:20.321 ******
2025-09-19 11:24:34.217885 | orchestrator | changed: [testbed-manager]
2025-09-19 11:24:34.217888 | orchestrator |
2025-09-19 11:24:34.217892 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:24:34.217897 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 11:24:34.217900 | orchestrator |
2025-09-19 11:24:34.217904 | orchestrator |
2025-09-19 11:24:34.217908 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:24:34.217912 | orchestrator | Friday 19 September 2025 11:24:33 +0000 (0:00:01.448) 0:00:21.770 ******
2025-09-19 11:24:34.217916 | orchestrator | ===============================================================================
2025-09-19 11:24:34.217919 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.80s
2025-09-19 11:24:34.217923 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.32s
2025-09-19 11:24:34.217927 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.45s
2025-09-19 11:24:34.217931 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.41s
2025-09-19 11:24:34.217943 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.24s
2025-09-19 11:24:34.217947 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.21s
2025-09-19 11:24:34.217951 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.05s
2025-09-19 11:24:34.217955 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.96s
2025-09-19 11:24:34.217958 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.87s
2025-09-19 11:24:34.217962 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.87s
2025-09-19 11:24:34.217966 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s
2025-09-19 11:24:34.217970 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.18s
2025-09-19 11:24:34.567708 | orchestrator |
2025-09-19 11:24:34.569659 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Sep 19 11:24:34 UTC 2025
2025-09-19 11:24:34.569672 | orchestrator |
2025-09-19 11:24:36.398891 | orchestrator | 2025-09-19 11:24:36 | INFO  | Collection nutshell is prepared for execution
2025-09-19 11:24:36.398973 | orchestrator | 2025-09-19 11:24:36 | INFO  | D [0] - dotfiles
2025-09-19 11:24:46.435335 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [0] - homer
2025-09-19 11:24:46.435428 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [0] - netdata
2025-09-19 11:24:46.435442 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [0] - openstackclient
2025-09-19 11:24:46.435453 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [0] - phpmyadmin
2025-09-19 11:24:46.435463 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [0] - common
2025-09-19 11:24:46.439584 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [1] -- loadbalancer
2025-09-19 11:24:46.439871 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [2] --- opensearch
2025-09-19 11:24:46.440018 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [2] --- mariadb-ng
2025-09-19 11:24:46.440053 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [3] ---- horizon
2025-09-19 11:24:46.440065 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [3] ---- keystone
2025-09-19 11:24:46.440416 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [4] ----- neutron
2025-09-19 11:24:46.440810 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [5] ------ wait-for-nova
2025-09-19 11:24:46.440837 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [5] ------ octavia
2025-09-19 11:24:46.443970 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [4] ----- barbican
2025-09-19 11:24:46.444005 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [4] ----- designate
2025-09-19 11:24:46.444026 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [4] ----- ironic
2025-09-19 11:24:46.444159 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [4] ----- placement
2025-09-19 11:24:46.444176 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [4] ----- magnum
2025-09-19 11:24:46.444674 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [1] -- openvswitch
2025-09-19 11:24:46.445106 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [2] --- ovn
2025-09-19 11:24:46.445127 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [1] -- memcached
2025-09-19 11:24:46.445392 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [1] -- redis
2025-09-19 11:24:46.445413 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [1] -- rabbitmq-ng
2025-09-19 11:24:46.445724 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [0] - kubernetes
2025-09-19 11:24:46.448541 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [1] -- kubeconfig
2025-09-19 11:24:46.448579 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [1] -- copy-kubeconfig
2025-09-19 11:24:46.448804 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [0] - ceph
2025-09-19 11:24:46.451206 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [1] -- ceph-pools
2025-09-19 11:24:46.451254 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [2] --- copy-ceph-keys
2025-09-19 11:24:46.451268 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [3] ---- cephclient
2025-09-19 11:24:46.451613 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-09-19 11:24:46.451711 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [4] ----- wait-for-keystone
2025-09-19 11:24:46.451727 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [5] ------ kolla-ceph-rgw
2025-09-19 11:24:46.451773 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [5] ------ glance
2025-09-19 11:24:46.451785 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [5] ------ cinder
2025-09-19 11:24:46.452108 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [5] ------ nova
2025-09-19 11:24:46.452267 | orchestrator | 2025-09-19 11:24:46 | INFO  | A [4] ----- prometheus
2025-09-19 11:24:46.452308 | orchestrator | 2025-09-19 11:24:46 | INFO  | D [5] ------ grafana
2025-09-19 11:24:46.651510 | orchestrator | 2025-09-19 11:24:46 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-09-19 11:24:46.651601 | orchestrator | 2025-09-19 11:24:46 | INFO  | Tasks are running in the background
2025-09-19 11:24:49.740517 | orchestrator | 2025-09-19 11:24:49 | INFO  | No task IDs specified, wait for
all currently running tasks
2025-09-19 11:24:51.866264 | orchestrator | 2025-09-19 11:24:51 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:24:51.867073 | orchestrator | 2025-09-19 11:24:51 | INFO  | Task b7e68095-1ec6-4c23-acf7-9ec80c538cd3 is in state STARTED
2025-09-19 11:24:51.869796 | orchestrator | 2025-09-19 11:24:51 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:24:51.870287 | orchestrator | 2025-09-19 11:24:51 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED
2025-09-19 11:24:51.870908 | orchestrator | 2025-09-19 11:24:51 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED
2025-09-19 11:24:51.871788 | orchestrator | 2025-09-19 11:24:51 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:24:51.872280 | orchestrator | 2025-09-19 11:24:51 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED
2025-09-19 11:24:51.872386 | orchestrator | 2025-09-19 11:24:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:24:54.945505 | orchestrator | 2025-09-19 11:24:54 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:24:54.945886 | orchestrator | 2025-09-19 11:24:54 | INFO  | Task b7e68095-1ec6-4c23-acf7-9ec80c538cd3 is in state STARTED
2025-09-19 11:24:54.946874 | orchestrator | 2025-09-19 11:24:54 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:24:54.947497 | orchestrator | 2025-09-19 11:24:54 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED
2025-09-19 11:24:54.948279 | orchestrator | 2025-09-19 11:24:54 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED
2025-09-19 11:24:54.948909 | orchestrator | 2025-09-19 11:24:54 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:24:54.949396 | orchestrator | 2025-09-19 11:24:54 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED
2025-09-19 11:24:54.949560 | orchestrator | 2025-09-19 11:24:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:24:58.016186 | orchestrator | 2025-09-19 11:24:58 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:24:58.016545 | orchestrator | 2025-09-19 11:24:58 | INFO  | Task b7e68095-1ec6-4c23-acf7-9ec80c538cd3 is in state STARTED
2025-09-19 11:24:58.019768 | orchestrator | 2025-09-19 11:24:58 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:24:58.021517 | orchestrator | 2025-09-19 11:24:58 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED
2025-09-19 11:24:58.021539 | orchestrator | 2025-09-19 11:24:58 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED
2025-09-19 11:24:58.021551 | orchestrator | 2025-09-19 11:24:58 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:24:58.021901 | orchestrator | 2025-09-19 11:24:58 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED
2025-09-19 11:24:58.022100 | orchestrator | 2025-09-19 11:24:58 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:25:01.108783 | orchestrator | 2025-09-19 11:25:01 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:25:01.108885 | orchestrator | 2025-09-19 11:25:01 | INFO  | Task b7e68095-1ec6-4c23-acf7-9ec80c538cd3 is in state STARTED
2025-09-19 11:25:01.108901 | orchestrator | 2025-09-19 11:25:01 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:25:01.108912 | orchestrator | 2025-09-19 11:25:01 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED
2025-09-19 11:25:01.108923 | orchestrator | 2025-09-19 11:25:01 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED
2025-09-19 11:25:01.108934 | orchestrator | 2025-09-19 11:25:01 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:25:01.108944 | orchestrator | 2025-09-19 11:25:01 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED
2025-09-19 11:25:01.108955 | orchestrator | 2025-09-19 11:25:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:25:04.197075 | orchestrator | 2025-09-19 11:25:04 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:25:04.197166 | orchestrator | 2025-09-19 11:25:04 | INFO  | Task b7e68095-1ec6-4c23-acf7-9ec80c538cd3 is in state STARTED
2025-09-19 11:25:04.197568 | orchestrator | 2025-09-19 11:25:04 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:25:04.197907 | orchestrator | 2025-09-19 11:25:04 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED
2025-09-19 11:25:04.198609 | orchestrator | 2025-09-19 11:25:04 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED
2025-09-19 11:25:04.200003 | orchestrator | 2025-09-19 11:25:04 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:25:04.200443 | orchestrator | 2025-09-19 11:25:04 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED
2025-09-19 11:25:04.200627 | orchestrator | 2025-09-19 11:25:04 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:25:07.257168 | orchestrator | 2025-09-19 11:25:07 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:25:07.257314 | orchestrator | 2025-09-19 11:25:07 | INFO  | Task b7e68095-1ec6-4c23-acf7-9ec80c538cd3 is in state STARTED
2025-09-19 11:25:07.257328 | orchestrator | 2025-09-19 11:25:07 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:25:07.257335 | orchestrator | 2025-09-19 11:25:07 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED
2025-09-19 11:25:07.257341 | orchestrator | 2025-09-19 11:25:07 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED
2025-09-19 11:25:07.257348 | orchestrator | 2025-09-19 11:25:07 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:25:07.258206 | orchestrator | 2025-09-19 11:25:07 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED
2025-09-19 11:25:07.258271 | orchestrator | 2025-09-19 11:25:07 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:25:10.423132 | orchestrator | 2025-09-19 11:25:10 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:25:10.423244 | orchestrator | 2025-09-19 11:25:10 | INFO  | Task b7e68095-1ec6-4c23-acf7-9ec80c538cd3 is in state STARTED
2025-09-19 11:25:10.423268 | orchestrator | 2025-09-19 11:25:10 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:25:10.423288 | orchestrator | 2025-09-19 11:25:10 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED
2025-09-19 11:25:10.423335 | orchestrator | 2025-09-19 11:25:10 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED
2025-09-19 11:25:10.423355 | orchestrator | 2025-09-19 11:25:10 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:25:10.423373 | orchestrator | 2025-09-19 11:25:10 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED
2025-09-19 11:25:10.423389 | orchestrator | 2025-09-19 11:25:10 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:25:13.469681 | orchestrator | 2025-09-19 11:25:13 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:25:13.469801 | orchestrator | 2025-09-19 11:25:13 | INFO  | Task b7e68095-1ec6-4c23-acf7-9ec80c538cd3 is in state STARTED
2025-09-19 11:25:13.469817 | orchestrator | 2025-09-19 11:25:13 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:25:13.469829 | orchestrator | 2025-09-19 11:25:13 | INFO  | Task
884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED
2025-09-19 11:25:13.469839 | orchestrator | 2025-09-19 11:25:13 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED
2025-09-19 11:25:13.469850 | orchestrator | 2025-09-19 11:25:13 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:25:13.469861 | orchestrator | 2025-09-19 11:25:13 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED
2025-09-19 11:25:13.469872 | orchestrator | 2025-09-19 11:25:13 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:25:16.571645 | orchestrator | 2025-09-19 11:25:16 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:25:16.581348 | orchestrator | 2025-09-19 11:25:16 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED
2025-09-19 11:25:16.586287 | orchestrator |
2025-09-19 11:25:16.586327 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-09-19 11:25:16.586340 | orchestrator |
2025-09-19 11:25:16.586352 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-09-19 11:25:16.586363 | orchestrator | Friday 19 September 2025 11:25:00 +0000 (0:00:00.876) 0:00:00.876 ******
2025-09-19 11:25:16.586374 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:25:16.586386 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:25:16.586397 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:25:16.586408 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:25:16.586418 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:25:16.586429 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:25:16.586454 | orchestrator | changed: [testbed-manager]
2025-09-19 11:25:16.586466 | orchestrator |
2025-09-19 11:25:16.586477 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
********
2025-09-19 11:25:16.586488 | orchestrator | Friday 19 September 2025 11:25:03 +0000 (0:00:03.762) 0:00:04.639 ******
2025-09-19 11:25:16.586499 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 11:25:16.586510 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 11:25:16.586522 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-19 11:25:16.586533 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 11:25:16.586544 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 11:25:16.586555 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 11:25:16.586566 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 11:25:16.586576 | orchestrator |
2025-09-19 11:25:16.586587 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-09-19 11:25:16.586598 | orchestrator | Friday 19 September 2025 11:25:05 +0000 (0:00:01.283) 0:00:05.922 ******
2025-09-19 11:25:16.586622 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:25:04.476380', 'end': '2025-09-19 11:25:04.483242', 'delta': '0:00:00.006862', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:25:16.586660 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:25:04.499609', 'end': '2025-09-19 11:25:04.505223', 'delta': '0:00:00.005614', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:25:16.586674 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:25:04.493113', 'end': '2025-09-19 11:25:04.500179', 'delta': '0:00:00.007066', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:25:16.586699 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:25:04.485313', 'end': '2025-09-19 11:25:04.492313', 'delta': '0:00:00.007000', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:25:16.586741 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:25:04.612160', 'end': '2025-09-19 11:25:04.618045', 'delta': '0:00:00.005885', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:25:16.586773 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:25:04.730574', 'end': '2025-09-19 11:25:04.743726', 'delta': '0:00:00.013152', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:25:16.586785 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 11:25:04.745219', 'end': '2025-09-19 11:25:04.755702', 'delta': '0:00:00.010483', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 11:25:16.586796 | orchestrator |
2025-09-19 11:25:16.586808 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.]
****
2025-09-19 11:25:16.586819 | orchestrator | Friday 19 September 2025 11:25:07 +0000 (0:00:02.823) 0:00:08.747 ******
2025-09-19 11:25:16.586829 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 11:25:16.586840 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-19 11:25:16.586851 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 11:25:16.586861 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 11:25:16.586872 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 11:25:16.586882 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 11:25:16.586893 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 11:25:16.586903 | orchestrator |
2025-09-19 11:25:16.586916 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-09-19 11:25:16.586928 | orchestrator | Friday 19 September 2025 11:25:09 +0000 (0:00:01.980) 0:00:10.728 ******
2025-09-19 11:25:16.586940 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 11:25:16.586952 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 11:25:16.586964 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 11:25:16.586976 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-09-19 11:25:16.586988 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 11:25:16.587000 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 11:25:16.587012 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 11:25:16.587024 | orchestrator |
2025-09-19 11:25:16.587036 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:25:16.587055 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:25:16.587069 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:25:16.587081 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:25:16.587102 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:25:16.587115 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:25:16.587127 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:25:16.587138 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:25:16.587150 | orchestrator |
2025-09-19 11:25:16.587256 | orchestrator |
2025-09-19 11:25:16.587276 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:25:16.587287 | orchestrator | Friday 19 September 2025 11:25:13 +0000 (0:00:03.532) 0:00:14.261 ******
2025-09-19 11:25:16.587303 | orchestrator | ===============================================================================
2025-09-19 11:25:16.587314 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.76s
2025-09-19 11:25:16.587325 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.53s
2025-09-19 11:25:16.587336 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.82s
2025-09-19 11:25:16.587347 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.98s
2025-09-19 11:25:16.587357 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links.
-------- 1.28s 2025-09-19 11:25:16.587368 | orchestrator | 2025-09-19 11:25:16 | INFO  | Task b7e68095-1ec6-4c23-acf7-9ec80c538cd3 is in state SUCCESS 2025-09-19 11:25:16.587379 | orchestrator | 2025-09-19 11:25:16 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:16.587395 | orchestrator | 2025-09-19 11:25:16 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:16.587806 | orchestrator | 2025-09-19 11:25:16 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED 2025-09-19 11:25:16.588392 | orchestrator | 2025-09-19 11:25:16 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:16.588903 | orchestrator | 2025-09-19 11:25:16 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED 2025-09-19 11:25:16.588924 | orchestrator | 2025-09-19 11:25:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:25:19.740915 | orchestrator | 2025-09-19 11:25:19 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:19.741022 | orchestrator | 2025-09-19 11:25:19 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:19.741045 | orchestrator | 2025-09-19 11:25:19 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:19.741074 | orchestrator | 2025-09-19 11:25:19 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:19.741106 | orchestrator | 2025-09-19 11:25:19 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED 2025-09-19 11:25:19.741128 | orchestrator | 2025-09-19 11:25:19 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:19.741147 | orchestrator | 2025-09-19 11:25:19 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED 2025-09-19 11:25:19.741163 | orchestrator | 2025-09-19 11:25:19 | INFO  | Wait 1 second(s) until the 
next check 2025-09-19 11:25:22.861379 | orchestrator | 2025-09-19 11:25:22 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:22.861549 | orchestrator | 2025-09-19 11:25:22 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:22.861602 | orchestrator | 2025-09-19 11:25:22 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:22.862240 | orchestrator | 2025-09-19 11:25:22 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:22.864132 | orchestrator | 2025-09-19 11:25:22 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED 2025-09-19 11:25:22.864401 | orchestrator | 2025-09-19 11:25:22 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:22.865890 | orchestrator | 2025-09-19 11:25:22 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED 2025-09-19 11:25:22.865928 | orchestrator | 2025-09-19 11:25:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:25:25.896503 | orchestrator | 2025-09-19 11:25:25 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:25.897541 | orchestrator | 2025-09-19 11:25:25 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:25.899620 | orchestrator | 2025-09-19 11:25:25 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:25.901273 | orchestrator | 2025-09-19 11:25:25 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:25.901827 | orchestrator | 2025-09-19 11:25:25 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED 2025-09-19 11:25:25.902418 | orchestrator | 2025-09-19 11:25:25 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:25.904074 | orchestrator | 2025-09-19 11:25:25 | INFO  | Task 
1f176782-5470-4482-8550-f2966c70caac is in state STARTED 2025-09-19 11:25:25.904108 | orchestrator | 2025-09-19 11:25:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:25:29.206508 | orchestrator | 2025-09-19 11:25:29 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:29.207594 | orchestrator | 2025-09-19 11:25:29 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:29.209110 | orchestrator | 2025-09-19 11:25:29 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:29.210848 | orchestrator | 2025-09-19 11:25:29 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:29.212510 | orchestrator | 2025-09-19 11:25:29 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED 2025-09-19 11:25:29.213665 | orchestrator | 2025-09-19 11:25:29 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:29.215380 | orchestrator | 2025-09-19 11:25:29 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED 2025-09-19 11:25:29.215556 | orchestrator | 2025-09-19 11:25:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:25:32.263256 | orchestrator | 2025-09-19 11:25:32 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:32.289285 | orchestrator | 2025-09-19 11:25:32 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:32.316570 | orchestrator | 2025-09-19 11:25:32 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:32.318968 | orchestrator | 2025-09-19 11:25:32 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:32.319873 | orchestrator | 2025-09-19 11:25:32 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED 2025-09-19 11:25:32.327905 | orchestrator | 2025-09-19 11:25:32 | INFO  | Task 
5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:32.334827 | orchestrator | 2025-09-19 11:25:32 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED 2025-09-19 11:25:32.334962 | orchestrator | 2025-09-19 11:25:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:25:35.376669 | orchestrator | 2025-09-19 11:25:35 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:35.377012 | orchestrator | 2025-09-19 11:25:35 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:35.378614 | orchestrator | 2025-09-19 11:25:35 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:35.379252 | orchestrator | 2025-09-19 11:25:35 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:35.380964 | orchestrator | 2025-09-19 11:25:35 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED 2025-09-19 11:25:35.382077 | orchestrator | 2025-09-19 11:25:35 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:35.383141 | orchestrator | 2025-09-19 11:25:35 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state STARTED 2025-09-19 11:25:35.383188 | orchestrator | 2025-09-19 11:25:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:25:38.439258 | orchestrator | 2025-09-19 11:25:38 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:38.440948 | orchestrator | 2025-09-19 11:25:38 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:38.442091 | orchestrator | 2025-09-19 11:25:38 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:38.444342 | orchestrator | 2025-09-19 11:25:38 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:38.444894 | orchestrator | 2025-09-19 11:25:38 | INFO  | Task 
75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED 2025-09-19 11:25:38.445882 | orchestrator | 2025-09-19 11:25:38 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:38.446375 | orchestrator | 2025-09-19 11:25:38 | INFO  | Task 1f176782-5470-4482-8550-f2966c70caac is in state SUCCESS 2025-09-19 11:25:38.446397 | orchestrator | 2025-09-19 11:25:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:25:41.486909 | orchestrator | 2025-09-19 11:25:41 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:41.487308 | orchestrator | 2025-09-19 11:25:41 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:41.488084 | orchestrator | 2025-09-19 11:25:41 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:41.489824 | orchestrator | 2025-09-19 11:25:41 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:41.490360 | orchestrator | 2025-09-19 11:25:41 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED 2025-09-19 11:25:41.491207 | orchestrator | 2025-09-19 11:25:41 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:41.491224 | orchestrator | 2025-09-19 11:25:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:25:44.535262 | orchestrator | 2025-09-19 11:25:44 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:44.535346 | orchestrator | 2025-09-19 11:25:44 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:44.535359 | orchestrator | 2025-09-19 11:25:44 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:44.535391 | orchestrator | 2025-09-19 11:25:44 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:44.536174 | orchestrator | 2025-09-19 11:25:44 | INFO  | Task 
75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state STARTED 2025-09-19 11:25:44.536370 | orchestrator | 2025-09-19 11:25:44 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:44.536401 | orchestrator | 2025-09-19 11:25:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:25:47.655190 | orchestrator | 2025-09-19 11:25:47 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:47.655428 | orchestrator | 2025-09-19 11:25:47 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:47.657455 | orchestrator | 2025-09-19 11:25:47 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:47.657612 | orchestrator | 2025-09-19 11:25:47 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:47.658360 | orchestrator | 2025-09-19 11:25:47 | INFO  | Task 75f49c87-fff9-4c71-ab3a-e6617ab407ec is in state SUCCESS 2025-09-19 11:25:47.658856 | orchestrator | 2025-09-19 11:25:47 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:47.658880 | orchestrator | 2025-09-19 11:25:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:25:50.713016 | orchestrator | 2025-09-19 11:25:50 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:50.713114 | orchestrator | 2025-09-19 11:25:50 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:50.714851 | orchestrator | 2025-09-19 11:25:50 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:50.717732 | orchestrator | 2025-09-19 11:25:50 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:50.718382 | orchestrator | 2025-09-19 11:25:50 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:50.718428 | orchestrator | 2025-09-19 11:25:50 | INFO  | Wait 1 
second(s) until the next check 2025-09-19 11:25:53.778243 | orchestrator | 2025-09-19 11:25:53 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:53.779238 | orchestrator | 2025-09-19 11:25:53 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:53.779748 | orchestrator | 2025-09-19 11:25:53 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:53.782415 | orchestrator | 2025-09-19 11:25:53 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:53.783351 | orchestrator | 2025-09-19 11:25:53 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:53.783535 | orchestrator | 2025-09-19 11:25:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:25:56.859424 | orchestrator | 2025-09-19 11:25:56 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:56.862902 | orchestrator | 2025-09-19 11:25:56 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:56.873846 | orchestrator | 2025-09-19 11:25:56 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:56.873963 | orchestrator | 2025-09-19 11:25:56 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:56.873987 | orchestrator | 2025-09-19 11:25:56 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:56.874089 | orchestrator | 2025-09-19 11:25:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:25:59.957834 | orchestrator | 2025-09-19 11:25:59 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:25:59.959987 | orchestrator | 2025-09-19 11:25:59 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:25:59.965423 | orchestrator | 2025-09-19 11:25:59 | INFO  | Task 
975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:25:59.966563 | orchestrator | 2025-09-19 11:25:59 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:25:59.969265 | orchestrator | 2025-09-19 11:25:59 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:25:59.970786 | orchestrator | 2025-09-19 11:25:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:26:03.031603 | orchestrator | 2025-09-19 11:26:03 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:26:03.031838 | orchestrator | 2025-09-19 11:26:03 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:26:03.033056 | orchestrator | 2025-09-19 11:26:03 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:26:03.035442 | orchestrator | 2025-09-19 11:26:03 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:26:03.036027 | orchestrator | 2025-09-19 11:26:03 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:26:03.036053 | orchestrator | 2025-09-19 11:26:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:26:06.065895 | orchestrator | 2025-09-19 11:26:06 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:26:06.066797 | orchestrator | 2025-09-19 11:26:06 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:26:06.067798 | orchestrator | 2025-09-19 11:26:06 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:26:06.069580 | orchestrator | 2025-09-19 11:26:06 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:26:06.069605 | orchestrator | 2025-09-19 11:26:06 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:26:06.069616 | orchestrator | 2025-09-19 11:26:06 | INFO  | Wait 1 
second(s) until the next check 2025-09-19 11:26:09.114803 | orchestrator | 2025-09-19 11:26:09 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:26:09.114892 | orchestrator | 2025-09-19 11:26:09 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:26:09.117378 | orchestrator | 2025-09-19 11:26:09 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:26:09.122616 | orchestrator | 2025-09-19 11:26:09 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:26:09.125436 | orchestrator | 2025-09-19 11:26:09 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:26:09.125478 | orchestrator | 2025-09-19 11:26:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:26:12.280694 | orchestrator | 2025-09-19 11:26:12 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:26:12.280754 | orchestrator | 2025-09-19 11:26:12 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:26:12.280760 | orchestrator | 2025-09-19 11:26:12 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:26:12.280778 | orchestrator | 2025-09-19 11:26:12 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:26:12.280783 | orchestrator | 2025-09-19 11:26:12 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:26:12.280788 | orchestrator | 2025-09-19 11:26:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:26:15.213558 | orchestrator | 2025-09-19 11:26:15 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:26:15.214140 | orchestrator | 2025-09-19 11:26:15 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state STARTED 2025-09-19 11:26:15.214757 | orchestrator | 2025-09-19 11:26:15 | INFO  | Task 
975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:26:15.216453 | orchestrator | 2025-09-19 11:26:15 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state STARTED 2025-09-19 11:26:15.217338 | orchestrator | 2025-09-19 11:26:15 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:26:15.217458 | orchestrator | 2025-09-19 11:26:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:26:18.264347 | orchestrator | 2025-09-19 11:26:18 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED 2025-09-19 11:26:18.264528 | orchestrator | 2025-09-19 11:26:18 | INFO  | Task ead3e4f1-6742-4532-98e7-de91aed2a076 is in state SUCCESS 2025-09-19 11:26:18.265562 | orchestrator | 2025-09-19 11:26:18.265607 | orchestrator | 2025-09-19 11:26:18.265620 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-19 11:26:18.265632 | orchestrator | 2025-09-19 11:26:18.265687 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-19 11:26:18.265701 | orchestrator | Friday 19 September 2025 11:24:58 +0000 (0:00:00.683) 0:00:00.683 ****** 2025-09-19 11:26:18.265712 | orchestrator | ok: [testbed-manager] => { 2025-09-19 11:26:18.265726 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-09-19 11:26:18.265739 | orchestrator | } 2025-09-19 11:26:18.265750 | orchestrator | 2025-09-19 11:26:18.265761 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-19 11:26:18.265772 | orchestrator | Friday 19 September 2025 11:24:59 +0000 (0:00:00.474) 0:00:01.157 ****** 2025-09-19 11:26:18.265783 | orchestrator | ok: [testbed-manager] 2025-09-19 11:26:18.265794 | orchestrator | 2025-09-19 11:26:18.265805 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-19 11:26:18.265816 | orchestrator | Friday 19 September 2025 11:25:00 +0000 (0:00:01.659) 0:00:02.817 ****** 2025-09-19 11:26:18.265835 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-19 11:26:18.265846 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-19 11:26:18.265857 | orchestrator | 2025-09-19 11:26:18.265868 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-19 11:26:18.265879 | orchestrator | Friday 19 September 2025 11:25:02 +0000 (0:00:01.718) 0:00:04.536 ****** 2025-09-19 11:26:18.265890 | orchestrator | changed: [testbed-manager] 2025-09-19 11:26:18.265900 | orchestrator | 2025-09-19 11:26:18.265911 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-19 11:26:18.265922 | orchestrator | Friday 19 September 2025 11:25:05 +0000 (0:00:02.559) 0:00:07.096 ****** 2025-09-19 11:26:18.265933 | orchestrator | changed: [testbed-manager] 2025-09-19 11:26:18.265944 | orchestrator | 2025-09-19 11:26:18.265955 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-19 11:26:18.265965 | orchestrator | Friday 19 September 2025 11:25:07 +0000 (0:00:02.525) 0:00:09.621 ****** 2025-09-19 11:26:18.265976 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-09-19 11:26:18.266143 | orchestrator | ok: [testbed-manager] 2025-09-19 11:26:18.266172 | orchestrator | 2025-09-19 11:26:18.266192 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-19 11:26:18.266212 | orchestrator | Friday 19 September 2025 11:25:34 +0000 (0:00:26.663) 0:00:36.285 ****** 2025-09-19 11:26:18.266227 | orchestrator | changed: [testbed-manager] 2025-09-19 11:26:18.266239 | orchestrator | 2025-09-19 11:26:18.266251 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:26:18.266263 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:26:18.266277 | orchestrator | 2025-09-19 11:26:18.266289 | orchestrator | 2025-09-19 11:26:18.266300 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:26:18.266313 | orchestrator | Friday 19 September 2025 11:25:37 +0000 (0:00:03.525) 0:00:39.811 ****** 2025-09-19 11:26:18.266325 | orchestrator | =============================================================================== 2025-09-19 11:26:18.266338 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.67s 2025-09-19 11:26:18.266349 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.52s 2025-09-19 11:26:18.266361 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.56s 2025-09-19 11:26:18.266373 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.53s 2025-09-19 11:26:18.266385 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.72s 2025-09-19 11:26:18.266396 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.66s 2025-09-19 11:26:18.266408 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.47s 2025-09-19 11:26:18.266420 | orchestrator | 2025-09-19 11:26:18.266432 | orchestrator | 2025-09-19 11:26:18.266444 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-19 11:26:18.266456 | orchestrator | 2025-09-19 11:26:18.266468 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-19 11:26:18.266480 | orchestrator | Friday 19 September 2025 11:24:58 +0000 (0:00:00.704) 0:00:00.704 ****** 2025-09-19 11:26:18.266491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-19 11:26:18.266504 | orchestrator | 2025-09-19 11:26:18.266515 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-19 11:26:18.266526 | orchestrator | Friday 19 September 2025 11:24:59 +0000 (0:00:00.742) 0:00:01.446 ****** 2025-09-19 11:26:18.266536 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-19 11:26:18.266547 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-19 11:26:18.266557 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-19 11:26:18.266568 | orchestrator | 2025-09-19 11:26:18.266578 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-19 11:26:18.266589 | orchestrator | Friday 19 September 2025 11:25:01 +0000 (0:00:02.048) 0:00:03.495 ****** 2025-09-19 11:26:18.266599 | orchestrator | changed: [testbed-manager] 2025-09-19 11:26:18.266610 | orchestrator | 2025-09-19 11:26:18.266621 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-19 11:26:18.266631 | orchestrator | Friday 19 September 2025 11:25:04 +0000 (0:00:02.883) 
0:00:06.379 ****** 2025-09-19 11:26:18.266677 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-19 11:26:18.266689 | orchestrator | ok: [testbed-manager] 2025-09-19 11:26:18.266700 | orchestrator | 2025-09-19 11:26:18.266711 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-19 11:26:18.266722 | orchestrator | Friday 19 September 2025 11:25:38 +0000 (0:00:34.530) 0:00:40.909 ****** 2025-09-19 11:26:18.266732 | orchestrator | changed: [testbed-manager] 2025-09-19 11:26:18.266755 | orchestrator | 2025-09-19 11:26:18.266766 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-19 11:26:18.266777 | orchestrator | Friday 19 September 2025 11:25:39 +0000 (0:00:01.194) 0:00:42.104 ****** 2025-09-19 11:26:18.266787 | orchestrator | ok: [testbed-manager] 2025-09-19 11:26:18.266798 | orchestrator | 2025-09-19 11:26:18.266809 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-19 11:26:18.266819 | orchestrator | Friday 19 September 2025 11:25:41 +0000 (0:00:01.512) 0:00:43.617 ****** 2025-09-19 11:26:18.266830 | orchestrator | changed: [testbed-manager] 2025-09-19 11:26:18.266840 | orchestrator | 2025-09-19 11:26:18.266851 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-09-19 11:26:18.266862 | orchestrator | Friday 19 September 2025 11:25:43 +0000 (0:00:02.503) 0:00:46.121 ****** 2025-09-19 11:26:18.266878 | orchestrator | changed: [testbed-manager] 2025-09-19 11:26:18.266889 | orchestrator | 2025-09-19 11:26:18.266900 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-19 11:26:18.266911 | orchestrator | Friday 19 September 2025 11:25:44 +0000 (0:00:00.930) 0:00:47.052 ****** 2025-09-19 11:26:18.266922 | orchestrator | changed: 
[testbed-manager] 2025-09-19 11:26:18.266933 | orchestrator | 2025-09-19 11:26:18.266943 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-19 11:26:18.266954 | orchestrator | Friday 19 September 2025 11:25:45 +0000 (0:00:00.712) 0:00:47.764 ****** 2025-09-19 11:26:18.266965 | orchestrator | ok: [testbed-manager] 2025-09-19 11:26:18.266975 | orchestrator | 2025-09-19 11:26:18.266986 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:26:18.266997 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:26:18.267008 | orchestrator | 2025-09-19 11:26:18.267018 | orchestrator | 2025-09-19 11:26:18.267029 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:26:18.267040 | orchestrator | Friday 19 September 2025 11:25:46 +0000 (0:00:00.699) 0:00:48.464 ****** 2025-09-19 11:26:18.267050 | orchestrator | =============================================================================== 2025-09-19 11:26:18.267061 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.53s 2025-09-19 11:26:18.267071 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.88s 2025-09-19 11:26:18.267082 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.50s 2025-09-19 11:26:18.267092 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.05s 2025-09-19 11:26:18.267103 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.51s 2025-09-19 11:26:18.267114 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.20s 2025-09-19 11:26:18.267125 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.93s 
2025-09-19 11:26:18.267135 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.74s
2025-09-19 11:26:18.267146 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.71s
2025-09-19 11:26:18.267156 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.70s
2025-09-19 11:26:18.267167 | orchestrator |
2025-09-19 11:26:18.267178 | orchestrator |
2025-09-19 11:26:18.267188 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-09-19 11:26:18.267199 | orchestrator |
2025-09-19 11:26:18.267209 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-09-19 11:26:18.267220 | orchestrator | Friday 19 September 2025 11:25:18 +0000 (0:00:00.251) 0:00:00.251 ******
2025-09-19 11:26:18.267231 | orchestrator | ok: [testbed-manager]
2025-09-19 11:26:18.267241 | orchestrator |
2025-09-19 11:26:18.267252 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-09-19 11:26:18.267262 | orchestrator | Friday 19 September 2025 11:25:19 +0000 (0:00:01.176) 0:00:01.427 ******
2025-09-19 11:26:18.267280 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-09-19 11:26:18.267291 | orchestrator |
2025-09-19 11:26:18.267301 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-09-19 11:26:18.267312 | orchestrator | Friday 19 September 2025 11:25:20 +0000 (0:00:00.794) 0:00:02.222 ******
2025-09-19 11:26:18.267323 | orchestrator | changed: [testbed-manager]
2025-09-19 11:26:18.267333 | orchestrator |
2025-09-19 11:26:18.267344 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-09-19 11:26:18.267355 | orchestrator | Friday 19 September 2025 11:25:21 +0000 (0:00:01.342) 0:00:03.564 ******
2025-09-19 11:26:18.267365 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-09-19 11:26:18.267376 | orchestrator | ok: [testbed-manager]
2025-09-19 11:26:18.267386 | orchestrator |
2025-09-19 11:26:18.267397 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-09-19 11:26:18.267408 | orchestrator | Friday 19 September 2025 11:26:10 +0000 (0:00:49.320) 0:00:52.885 ******
2025-09-19 11:26:18.267418 | orchestrator | changed: [testbed-manager]
2025-09-19 11:26:18.267429 | orchestrator |
2025-09-19 11:26:18.267440 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:26:18.267450 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:26:18.267461 | orchestrator |
2025-09-19 11:26:18.267472 | orchestrator |
2025-09-19 11:26:18.267482 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:26:18.267499 | orchestrator | Friday 19 September 2025 11:26:16 +0000 (0:00:06.031) 0:00:58.916 ******
2025-09-19 11:26:18.267510 | orchestrator | ===============================================================================
2025-09-19 11:26:18.267521 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 49.32s
2025-09-19 11:26:18.267531 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 6.03s
2025-09-19 11:26:18.267542 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.34s
2025-09-19 11:26:18.267553 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.18s
2025-09-19 11:26:18.267563 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.79s
2025-09-19 11:26:18.269263 | orchestrator | 2025-09-19 11:26:18 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:18.269888 | orchestrator | 2025-09-19 11:26:18 | INFO  | Task 884459f8-9610-4445-b2db-1e359c88e2f1 is in state SUCCESS
2025-09-19 11:26:18.270921 | orchestrator |
2025-09-19 11:26:18.270952 | orchestrator |
2025-09-19 11:26:18.270966 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:26:18.270976 | orchestrator |
2025-09-19 11:26:18.270985 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:26:18.270993 | orchestrator | Friday 19 September 2025 11:24:59 +0000 (0:00:00.194) 0:00:00.194 ******
2025-09-19 11:26:18.271002 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-09-19 11:26:18.271011 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-09-19 11:26:18.271019 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-09-19 11:26:18.271028 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-09-19 11:26:18.271037 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-09-19 11:26:18.271045 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-09-19 11:26:18.271053 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-09-19 11:26:18.271062 | orchestrator |
2025-09-19 11:26:18.271071 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-09-19 11:26:18.271079 | orchestrator |
2025-09-19 11:26:18.271088 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-09-19 11:26:18.271108 | orchestrator | Friday 19 September 2025 11:25:01 +0000 (0:00:01.773) 0:00:01.968 ******
2025-09-19 11:26:18.271128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:26:18.271144 | orchestrator |
2025-09-19 11:26:18.271153 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-09-19 11:26:18.271161 | orchestrator | Friday 19 September 2025 11:25:02 +0000 (0:00:01.189) 0:00:03.157 ******
2025-09-19 11:26:18.271170 | orchestrator | ok: [testbed-manager]
2025-09-19 11:26:18.271179 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:26:18.271187 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:26:18.271196 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:26:18.271204 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:26:18.271213 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:26:18.271221 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:26:18.271230 | orchestrator |
2025-09-19 11:26:18.271238 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-09-19 11:26:18.271247 | orchestrator | Friday 19 September 2025 11:25:04 +0000 (0:00:03.801) 0:00:05.391 ******
2025-09-19 11:26:18.271256 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:26:18.271264 | orchestrator | ok: [testbed-manager]
2025-09-19 11:26:18.271273 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:26:18.271282 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:26:18.271290 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:26:18.271299 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:26:18.271307 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:26:18.271316 | orchestrator |
2025-09-19 11:26:18.271324 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-09-19 11:26:18.271333 | orchestrator | Friday 19 September 2025 11:25:08 +0000 (0:00:02.585) 0:00:09.193 ******
2025-09-19 11:26:18.271342 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:26:18.271350 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:26:18.271359 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:26:18.271367 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:26:18.271376 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:26:18.271385 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:26:18.271393 | orchestrator | changed: [testbed-manager]
2025-09-19 11:26:18.271402 | orchestrator |
2025-09-19 11:26:18.271410 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-09-19 11:26:18.271419 | orchestrator | Friday 19 September 2025 11:25:11 +0000 (0:00:02.585) 0:00:11.778 ******
2025-09-19 11:26:18.271428 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:26:18.271436 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:26:18.271444 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:26:18.271453 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:26:18.271461 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:26:18.271470 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:26:18.271478 | orchestrator | changed: [testbed-manager]
2025-09-19 11:26:18.271487 | orchestrator |
2025-09-19 11:26:18.271495 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-09-19 11:26:18.271504 | orchestrator | Friday 19 September 2025 11:25:21 +0000 (0:00:09.731) 0:00:21.510 ******
2025-09-19 11:26:18.271513 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:26:18.271521 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:26:18.271532 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:26:18.271541 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:26:18.271550 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:26:18.271560 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:26:18.271569 | orchestrator | changed: [testbed-manager]
2025-09-19 11:26:18.271580 | orchestrator |
2025-09-19 11:26:18.271589 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-09-19 11:26:18.271599 | orchestrator | Friday 19 September 2025 11:25:54 +0000 (0:00:33.712) 0:00:55.223 ******
2025-09-19 11:26:18.271614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:26:18.271626 | orchestrator |
2025-09-19 11:26:18.271636 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-09-19 11:26:18.271678 | orchestrator | Friday 19 September 2025 11:25:56 +0000 (0:00:01.566) 0:00:56.789 ******
2025-09-19 11:26:18.271688 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-09-19 11:26:18.271698 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-09-19 11:26:18.271707 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-09-19 11:26:18.271717 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-09-19 11:26:18.271734 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-09-19 11:26:18.271745 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-09-19 11:26:18.271755 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-09-19 11:26:18.271764 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-09-19 11:26:18.271774 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-09-19 11:26:18.271784 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-09-19 11:26:18.271793 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-09-19 11:26:18.271803 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-09-19 11:26:18.271813 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-09-19 11:26:18.271823 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-09-19 11:26:18.271832 | orchestrator |
2025-09-19 11:26:18.271842 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-09-19 11:26:18.271852 | orchestrator | Friday 19 September 2025 11:26:02 +0000 (0:00:06.181) 0:01:02.971 ******
2025-09-19 11:26:18.271862 | orchestrator | ok: [testbed-manager]
2025-09-19 11:26:18.271871 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:26:18.271881 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:26:18.271891 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:26:18.271901 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:26:18.271909 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:26:18.271918 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:26:18.271926 | orchestrator |
2025-09-19 11:26:18.271935 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-09-19 11:26:18.271944 | orchestrator | Friday 19 September 2025 11:26:03 +0000 (0:00:01.205) 0:01:04.176 ******
2025-09-19 11:26:18.271952 | orchestrator | changed: [testbed-manager]
2025-09-19 11:26:18.271961 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:26:18.271970 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:26:18.271978 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:26:18.271987 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:26:18.271996 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:26:18.272004 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:26:18.272013 | orchestrator |
2025-09-19 11:26:18.272022 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-09-19 11:26:18.272030 | orchestrator | Friday 19 September 2025 11:26:05 +0000 (0:00:01.479) 0:01:05.656 ******
2025-09-19 11:26:18.272039 | orchestrator | ok: [testbed-manager]
2025-09-19 11:26:18.272048 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:26:18.272079 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:26:18.272088 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:26:18.272097 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:26:18.272105 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:26:18.272114 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:26:18.272122 | orchestrator |
2025-09-19 11:26:18.272131 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-09-19 11:26:18.272140 | orchestrator | Friday 19 September 2025 11:26:06 +0000 (0:00:01.168) 0:01:06.824 ******
2025-09-19 11:26:18.272155 | orchestrator | ok: [testbed-manager]
2025-09-19 11:26:18.272164 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:26:18.272172 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:26:18.272181 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:26:18.272189 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:26:18.272198 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:26:18.272206 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:26:18.272215 | orchestrator |
2025-09-19 11:26:18.272224 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-09-19 11:26:18.272233 | orchestrator | Friday 19 September 2025 11:26:08 +0000 (0:00:01.645) 0:01:08.470 ******
2025-09-19 11:26:18.272241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-09-19 11:26:18.272252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:26:18.272261 | orchestrator |
2025-09-19 11:26:18.272270 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-09-19 11:26:18.272278 | orchestrator | Friday 19 September 2025 11:26:09 +0000 (0:00:01.435) 0:01:09.905 ******
2025-09-19 11:26:18.272287 | orchestrator | changed: [testbed-manager]
2025-09-19 11:26:18.272295 | orchestrator |
2025-09-19 11:26:18.272304 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-09-19 11:26:18.272313 | orchestrator | Friday 19 September 2025 11:26:11 +0000 (0:00:02.430) 0:01:12.336 ******
2025-09-19 11:26:18.272321 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:26:18.272330 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:26:18.272339 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:26:18.272347 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:26:18.272356 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:26:18.272364 | orchestrator | changed: [testbed-manager]
2025-09-19 11:26:18.272373 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:26:18.272381 | orchestrator |
2025-09-19 11:26:18.272390 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:26:18.272399 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:26:18.272408 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:26:18.272417 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:26:18.272426 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:26:18.272439 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:26:18.272451 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:26:18.272461 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:26:18.272469 | orchestrator |
2025-09-19 11:26:18.272478 | orchestrator |
2025-09-19 11:26:18.272487 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:26:18.272496 | orchestrator | Friday 19 September 2025 11:26:14 +0000 (0:00:03.017) 0:01:15.353 ******
2025-09-19 11:26:18.272504 | orchestrator | ===============================================================================
2025-09-19 11:26:18.272513 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 33.71s
2025-09-19 11:26:18.272530 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.73s
2025-09-19 11:26:18.272539 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.18s
2025-09-19 11:26:18.272548 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.80s
2025-09-19 11:26:18.272556 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.02s
2025-09-19 11:26:18.272565 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.59s
2025-09-19 11:26:18.272574 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.43s
2025-09-19 11:26:18.272582 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.23s
2025-09-19 11:26:18.272591 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.77s
2025-09-19 11:26:18.272599 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.65s
2025-09-19 11:26:18.272608 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.57s
2025-09-19 11:26:18.272616 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.48s
2025-09-19 11:26:18.272625 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.44s
2025-09-19 11:26:18.272634 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.21s
2025-09-19 11:26:18.272655 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.19s
2025-09-19 11:26:18.272664 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.17s
2025-09-19 11:26:18.273071 | orchestrator | 2025-09-19 11:26:18 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:18.273099 | orchestrator | 2025-09-19 11:26:18 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:26:21.303612 | orchestrator | 2025-09-19 11:26:21 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:26:21.304469 | orchestrator | 2025-09-19 11:26:21 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:21.305736 | orchestrator | 2025-09-19 11:26:21 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:21.305760 | orchestrator | 2025-09-19 11:26:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:26:24.345595 | orchestrator | 2025-09-19 11:26:24 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:26:24.347426 | orchestrator | 2025-09-19 11:26:24 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:24.349752 | orchestrator | 2025-09-19 11:26:24 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:24.349842 | orchestrator | 2025-09-19 11:26:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:26:27.403774 | orchestrator | 2025-09-19 11:26:27 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:26:27.403877 | orchestrator | 2025-09-19 11:26:27 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:27.404990 | orchestrator | 2025-09-19 11:26:27 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:27.405049 | orchestrator | 2025-09-19 11:26:27 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:26:30.453999 | orchestrator | 2025-09-19 11:26:30 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:26:30.455358 | orchestrator | 2025-09-19 11:26:30 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:30.456335 | orchestrator | 2025-09-19 11:26:30 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:30.456397 | orchestrator | 2025-09-19 11:26:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:26:33.527220 | orchestrator | 2025-09-19 11:26:33 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:26:33.529197 | orchestrator | 2025-09-19 11:26:33 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:33.529292 | orchestrator | 2025-09-19 11:26:33 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:33.529316 | orchestrator | 2025-09-19 11:26:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:26:36.600123 | orchestrator | 2025-09-19 11:26:36 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:26:36.601802 | orchestrator | 2025-09-19 11:26:36 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:36.602514 | orchestrator | 2025-09-19 11:26:36 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:36.602551 | orchestrator | 2025-09-19 11:26:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:26:39.666156 | orchestrator | 2025-09-19 11:26:39 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:26:39.667455 | orchestrator | 2025-09-19 11:26:39 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:39.669251 | orchestrator | 2025-09-19 11:26:39 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:39.669727 | orchestrator | 2025-09-19 11:26:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:26:42.722285 | orchestrator | 2025-09-19 11:26:42 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:26:42.722928 | orchestrator | 2025-09-19 11:26:42 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:42.724230 | orchestrator | 2025-09-19 11:26:42 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:42.724251 | orchestrator | 2025-09-19 11:26:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:26:45.768703 | orchestrator | 2025-09-19 11:26:45 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:26:45.769983 | orchestrator | 2025-09-19 11:26:45 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:45.772024 | orchestrator | 2025-09-19 11:26:45 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:45.772061 | orchestrator | 2025-09-19 11:26:45 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:26:48.815199 | orchestrator | 2025-09-19 11:26:48 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:26:48.818377 | orchestrator | 2025-09-19 11:26:48 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:48.819957 | orchestrator | 2025-09-19 11:26:48 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:48.819990 | orchestrator | 2025-09-19 11:26:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:26:51.862190 | orchestrator | 2025-09-19 11:26:51 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:26:51.862321 | orchestrator | 2025-09-19 11:26:51 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:51.863472 | orchestrator | 2025-09-19 11:26:51 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:51.863513 | orchestrator | 2025-09-19 11:26:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:26:54.910529 | orchestrator | 2025-09-19 11:26:54 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:26:54.910724 | orchestrator | 2025-09-19 11:26:54 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:54.911595 | orchestrator | 2025-09-19 11:26:54 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:54.911669 | orchestrator | 2025-09-19 11:26:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:26:57.956732 | orchestrator | 2025-09-19 11:26:57 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:26:57.956830 | orchestrator | 2025-09-19 11:26:57 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:26:57.958406 | orchestrator | 2025-09-19 11:26:57 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:26:57.958432 | orchestrator | 2025-09-19 11:26:57 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:27:00.993278 | orchestrator | 2025-09-19 11:27:00 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:27:00.999526 | orchestrator | 2025-09-19 11:27:00 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:27:01.000726 | orchestrator | 2025-09-19 11:27:01 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:27:01.000765 | orchestrator | 2025-09-19 11:27:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:27:04.041740 | orchestrator | 2025-09-19 11:27:04 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:27:04.043069 | orchestrator | 2025-09-19 11:27:04 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:27:04.045835 | orchestrator | 2025-09-19 11:27:04 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:27:04.045893 | orchestrator | 2025-09-19 11:27:04 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:27:07.076095 | orchestrator | 2025-09-19 11:27:07 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:27:07.076271 | orchestrator | 2025-09-19 11:27:07 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:27:07.077324 | orchestrator | 2025-09-19 11:27:07 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:27:07.077377 | orchestrator | 2025-09-19 11:27:07 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:27:10.118282 | orchestrator | 2025-09-19 11:27:10 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:27:10.120510 | orchestrator | 2025-09-19 11:27:10 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:27:10.123394 | orchestrator | 2025-09-19 11:27:10 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:27:10.123458 | orchestrator | 2025-09-19 11:27:10 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:27:13.157822 | orchestrator | 2025-09-19 11:27:13 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:27:13.159129 | orchestrator | 2025-09-19 11:27:13 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:27:13.160538 | orchestrator | 2025-09-19 11:27:13 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:27:13.160598 | orchestrator | 2025-09-19 11:27:13 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:27:16.199907 | orchestrator | 2025-09-19 11:27:16 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state STARTED
2025-09-19 11:27:16.200329 | orchestrator | 2025-09-19 11:27:16 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:27:16.202396 | orchestrator | 2025-09-19 11:27:16 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:27:16.202471 | orchestrator | 2025-09-19 11:27:16 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:27:19.247388 | orchestrator | 2025-09-19 11:27:19 | INFO  | Task f049fe22-0118-49d9-838a-fee4b6fbe49d is in state SUCCESS
2025-09-19 11:27:19.249838 | orchestrator |
2025-09-19 11:27:19.249900 | orchestrator |
2025-09-19 11:27:19.249913 | orchestrator | PLAY [Apply role common] *******************************************************
2025-09-19 11:27:19.249925 | orchestrator |
2025-09-19 11:27:19.249936 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-19 11:27:19.249949 | orchestrator | Friday 19 September 2025 11:24:51 +0000 (0:00:00.349) 0:00:00.349 ******
2025-09-19 11:27:19.249961 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:27:19.249983 | orchestrator |
2025-09-19 11:27:19.249998 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-09-19 11:27:19.250105 | orchestrator | Friday 19 September 2025 11:24:52 +0000 (0:00:01.410) 0:00:01.759 ******
2025-09-19 11:27:19.250118 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:27:19.250129 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:27:19.250147 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:27:19.250166 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:27:19.250179 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:27:19.250191 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:27:19.250202 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:27:19.250212 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:27:19.250223 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:27:19.250236 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:27:19.250255 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:27:19.250266 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:27:19.250277 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:27:19.250288 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:27:19.250299 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 11:27:19.250310 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:27:19.250321 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:27:19.250332 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:27:19.250343 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 11:27:19.250353 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:27:19.250364 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 11:27:19.250375 | orchestrator |
2025-09-19 11:27:19.250386 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-19 11:27:19.250418 | orchestrator | Friday 19 September 2025 11:24:57 +0000 (0:00:04.691) 0:00:06.451 ******
2025-09-19 11:27:19.250431 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:27:19.250445 | orchestrator |
2025-09-19 11:27:19.250457 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-09-19 11:27:19.250470 | orchestrator | Friday 19 September 2025 11:24:58 +0000 (0:00:01.261) 0:00:07.712 ******
2025-09-19 11:27:19.250489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.250507 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.250537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.250549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.250592 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.250606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.250626 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.250638 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.250650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.250669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.250681 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.250710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.250732 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.250751 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:27:19.250763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.250774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.250786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.250811 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.250823 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.250834 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.250852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.250872 | orchestrator | 2025-09-19 11:27:19.250883 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-09-19 11:27:19.250894 | orchestrator | Friday 19 September 2025 11:25:05 +0000 (0:00:06.730) 0:00:14.443 ****** 2025-09-19 11:27:19.250906 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:27:19.250918 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.250929 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.250941 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:27:19.250960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:27:19.250973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.250984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:27:19.251022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:27:19.251056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251077 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251089 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:27:19.251100 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:27:19.251111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:27:19.251127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251156 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:27:19.251167 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:27:19.251178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:27:19.251189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251212 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:27:19.251235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:27:19.251247 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251277 | orchestrator | 
skipping: [testbed-node-5] 2025-09-19 11:27:19.251288 | orchestrator | 2025-09-19 11:27:19.251298 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-19 11:27:19.251309 | orchestrator | Friday 19 September 2025 11:25:08 +0000 (0:00:02.939) 0:00:17.382 ****** 2025-09-19 11:27:19.251325 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:27:19.251336 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251348 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-09-19 11:27:19.251359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:27:19.251376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251400 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:27:19.251417 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:27:19.251428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:27:19.251444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:27:19.251478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251500 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:27:19.251511 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:27:19.251532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2025-09-19 11:27:19.251551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:27:19.251609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-19 11:27:19.251621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.251632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.251643 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:19.251654 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:27:19.251665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.251682 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.251707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.251726 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:27:19.251751 | orchestrator |
2025-09-19 11:27:19.251776 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-09-19 11:27:19.251792 | orchestrator | Friday 19 September 2025 11:25:14 +0000 (0:00:05.693) 0:00:23.075 ******
2025-09-19 11:27:19.251809 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:27:19.251826 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:27:19.251845 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:27:19.251863 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:27:19.251880 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:27:19.251898 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:19.251913 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:27:19.251924 | orchestrator |
2025-09-19 11:27:19.251935 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-09-19 11:27:19.251954 | orchestrator | Friday 19 September 2025 11:25:15 +0000 (0:00:01.197) 0:00:24.273 ******
2025-09-19 11:27:19.251977 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:27:19.252003 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:27:19.252019 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:27:19.252036 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:27:19.252053 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:27:19.252069 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:27:19.252085 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:27:19.252100 | orchestrator |
2025-09-19 11:27:19.252126 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-09-19 11:27:19.252144 | orchestrator | Friday 19 September 2025 11:25:16 +0000 (0:00:01.521) 0:00:25.794 ******
2025-09-19 11:27:19.252164 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.252184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.252204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.252218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.252249 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.252262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.252273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252296 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.252307 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252349 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252362 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252389 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252411 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252440 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.252480 | orchestrator |
2025-09-19 11:27:19.252491 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-09-19 11:27:19.252501 | orchestrator | Friday 19 September 2025 11:25:22 +0000 (0:00:05.078) 0:00:30.872 ******
2025-09-19 11:27:19.252512 | orchestrator | [WARNING]: Skipped
2025-09-19 11:27:19.252524 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-09-19 11:27:19.252535 | orchestrator | to this access issue:
2025-09-19 11:27:19.252546 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-09-19 11:27:19.252557 | orchestrator | directory
2025-09-19 11:27:19.252644 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:27:19.252664 | orchestrator |
2025-09-19 11:27:19.252682 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-09-19 11:27:19.252694 | orchestrator | Friday 19 September 2025 11:25:23 +0000 (0:00:01.155) 0:00:32.028 ******
2025-09-19 11:27:19.252704 | orchestrator | [WARNING]: Skipped
2025-09-19 11:27:19.252715 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-09-19 11:27:19.252726 | orchestrator | to this access issue:
2025-09-19 11:27:19.252737 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-09-19 11:27:19.252747 | orchestrator | directory
2025-09-19 11:27:19.252758 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:27:19.252769 | orchestrator |
2025-09-19 11:27:19.252785 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-09-19 11:27:19.252796 | orchestrator | Friday 19 September 2025 11:25:23 +0000 (0:00:00.738) 0:00:32.767 ******
2025-09-19 11:27:19.252807 | orchestrator | [WARNING]: Skipped
2025-09-19 11:27:19.252818 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-09-19 11:27:19.252828 | orchestrator | to this access issue:
2025-09-19 11:27:19.252839 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-09-19 11:27:19.252850 | orchestrator | directory
2025-09-19 11:27:19.252860 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:27:19.252871 | orchestrator |
2025-09-19 11:27:19.252892 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-09-19 11:27:19.252903 | orchestrator | Friday 19 September 2025 11:25:24 +0000 (0:00:00.825) 0:00:33.592 ******
2025-09-19 11:27:19.252913 | orchestrator | [WARNING]: Skipped
2025-09-19 11:27:19.252924 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-09-19 11:27:19.252945 | orchestrator | to this access issue:
2025-09-19 11:27:19.252955 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-09-19 11:27:19.252966 | orchestrator | directory
2025-09-19 11:27:19.252976 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:27:19.252987 | orchestrator |
2025-09-19 11:27:19.252998 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-09-19 11:27:19.253010 | orchestrator | Friday 19 September 2025 11:25:25 +0000 (0:00:00.803) 0:00:34.396 ******
2025-09-19 11:27:19.253029 | orchestrator | changed: [testbed-manager]
2025-09-19 11:27:19.253039 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:27:19.253048 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:27:19.253058 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:27:19.253067 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:27:19.253076 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:27:19.253085 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:27:19.253095 | orchestrator |
2025-09-19 11:27:19.253104 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-09-19 11:27:19.253114 | orchestrator | Friday 19 September 2025 11:25:29 +0000 (0:00:03.689) 0:00:38.086 ******
2025-09-19 11:27:19.253123 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-19 11:27:19.253133 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-19 11:27:19.253142 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-19 11:27:19.253151 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-19 11:27:19.253161 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-19 11:27:19.253170 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-19 11:27:19.253185 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-19 11:27:19.253195 | orchestrator |
2025-09-19 11:27:19.253205 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie
exists] ***************************
2025-09-19 11:27:19.253214 | orchestrator | Friday 19 September 2025 11:25:32 +0000 (0:00:02.945) 0:00:41.032 ******
2025-09-19 11:27:19.253223 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:27:19.253233 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:27:19.253242 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:27:19.253251 | orchestrator | changed: [testbed-manager]
2025-09-19 11:27:19.253267 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:27:19.253277 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:27:19.253286 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:27:19.253296 | orchestrator |
2025-09-19 11:27:19.253305 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-09-19 11:27:19.253315 | orchestrator | Friday 19 September 2025 11:25:35 +0000 (0:00:03.562) 0:00:44.594 ******
2025-09-19 11:27:19.253325 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.253344 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253358 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.253369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253379 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.253389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253411 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253433 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.253449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253463 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253474 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.253484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253493 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253503 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.253519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253535 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253546 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253616 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253630 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 11:27:19.253640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253651 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:27:19.253660 | orchestrator |
2025-09-19 11:27:19.253670 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-09-19 11:27:19.253680 | orchestrator | Friday 19 September 2025 11:25:38 +0000 (0:00:03.159) 0:00:47.753 ******
2025-09-19 11:27:19.253689 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-19 11:27:19.253699 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-19 11:27:19.253708 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-19 11:27:19.253717 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-19 11:27:19.253727 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-19 11:27:19.253736 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-19 11:27:19.253746 | orchestrator | changed: [testbed-node-5] =>
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 11:27:19.253761 | orchestrator | 2025-09-19 11:27:19.254011 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-19 11:27:19.254075 | orchestrator | Friday 19 September 2025 11:25:41 +0000 (0:00:03.007) 0:00:50.761 ****** 2025-09-19 11:27:19.254084 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:27:19.254092 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:27:19.254100 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:27:19.254108 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:27:19.254116 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:27:19.254123 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:27:19.254131 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 11:27:19.254139 | orchestrator | 2025-09-19 11:27:19.254147 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-19 11:27:19.254154 | orchestrator | Friday 19 September 2025 11:25:44 +0000 (0:00:02.996) 0:00:53.757 ****** 2025-09-19 11:27:19.254163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:27:19.254177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:27:19.254186 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:27:19.254194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:27:19.254202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:27:19.254234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254242 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254254 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:27:19.254262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254280 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254300 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 11:27:19.254309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254317 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254326 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:27:19.254382 | orchestrator | 2025-09-19 11:27:19.254390 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-19 11:27:19.254398 | orchestrator | Friday 19 September 2025 11:25:49 +0000 (0:00:04.569) 0:00:58.327 ****** 2025-09-19 11:27:19.254409 | orchestrator | changed: [testbed-manager] 2025-09-19 11:27:19.254418 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:27:19.254426 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:27:19.254433 | 
orchestrator | changed: [testbed-node-2] 2025-09-19 11:27:19.254441 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:27:19.254449 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:27:19.254456 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:27:19.254464 | orchestrator | 2025-09-19 11:27:19.254472 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-19 11:27:19.254480 | orchestrator | Friday 19 September 2025 11:25:51 +0000 (0:00:01.942) 0:01:00.269 ****** 2025-09-19 11:27:19.254487 | orchestrator | changed: [testbed-manager] 2025-09-19 11:27:19.254495 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:27:19.254503 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:27:19.254510 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:27:19.254518 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:27:19.254526 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:27:19.254533 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:27:19.254541 | orchestrator | 2025-09-19 11:27:19.254554 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 11:27:19.254596 | orchestrator | Friday 19 September 2025 11:25:52 +0000 (0:00:01.429) 0:01:01.699 ****** 2025-09-19 11:27:19.254611 | orchestrator | 2025-09-19 11:27:19.254624 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 11:27:19.254636 | orchestrator | Friday 19 September 2025 11:25:52 +0000 (0:00:00.074) 0:01:01.773 ****** 2025-09-19 11:27:19.254648 | orchestrator | 2025-09-19 11:27:19.254661 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 11:27:19.254674 | orchestrator | Friday 19 September 2025 11:25:52 +0000 (0:00:00.079) 0:01:01.853 ****** 2025-09-19 11:27:19.254685 | orchestrator | 2025-09-19 11:27:19.254695 | orchestrator | TASK [common : Flush 
handlers] ************************************************* 2025-09-19 11:27:19.254714 | orchestrator | Friday 19 September 2025 11:25:53 +0000 (0:00:00.356) 0:01:02.210 ****** 2025-09-19 11:27:19.254726 | orchestrator | 2025-09-19 11:27:19.254738 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 11:27:19.254754 | orchestrator | Friday 19 September 2025 11:25:53 +0000 (0:00:00.091) 0:01:02.301 ****** 2025-09-19 11:27:19.254767 | orchestrator | 2025-09-19 11:27:19.254779 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 11:27:19.254790 | orchestrator | Friday 19 September 2025 11:25:53 +0000 (0:00:00.066) 0:01:02.368 ****** 2025-09-19 11:27:19.254802 | orchestrator | 2025-09-19 11:27:19.254814 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-19 11:27:19.254826 | orchestrator | Friday 19 September 2025 11:25:53 +0000 (0:00:00.066) 0:01:02.435 ****** 2025-09-19 11:27:19.254850 | orchestrator | 2025-09-19 11:27:19.254863 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-19 11:27:19.254875 | orchestrator | Friday 19 September 2025 11:25:53 +0000 (0:00:00.086) 0:01:02.522 ****** 2025-09-19 11:27:19.254887 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:27:19.254901 | orchestrator | changed: [testbed-manager] 2025-09-19 11:27:19.254913 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:27:19.254926 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:27:19.254938 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:27:19.254951 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:27:19.254965 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:27:19.254978 | orchestrator | 2025-09-19 11:27:19.254992 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-19 
11:27:19.255004 | orchestrator | Friday 19 September 2025 11:26:32 +0000 (0:00:39.013) 0:01:41.535 ****** 2025-09-19 11:27:19.255018 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:27:19.255033 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:27:19.255046 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:27:19.255058 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:27:19.255071 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:27:19.255084 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:27:19.255097 | orchestrator | changed: [testbed-manager] 2025-09-19 11:27:19.255110 | orchestrator | 2025-09-19 11:27:19.255122 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-19 11:27:19.255134 | orchestrator | Friday 19 September 2025 11:27:06 +0000 (0:00:33.489) 0:02:15.024 ****** 2025-09-19 11:27:19.255147 | orchestrator | ok: [testbed-manager] 2025-09-19 11:27:19.255161 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:27:19.255173 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:27:19.255184 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:27:19.255196 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:27:19.255208 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:27:19.255220 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:27:19.255233 | orchestrator | 2025-09-19 11:27:19.255246 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-19 11:27:19.255259 | orchestrator | Friday 19 September 2025 11:27:08 +0000 (0:00:02.064) 0:02:17.089 ****** 2025-09-19 11:27:19.255272 | orchestrator | changed: [testbed-manager] 2025-09-19 11:27:19.255285 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:27:19.255298 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:27:19.255313 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:27:19.255326 | orchestrator | changed: [testbed-node-3] 2025-09-19 
11:27:19.255339 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:27:19.255352 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:27:19.255366 | orchestrator | 2025-09-19 11:27:19.255380 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:27:19.255395 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 11:27:19.255410 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 11:27:19.255423 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 11:27:19.255446 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 11:27:19.255460 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 11:27:19.255472 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 11:27:19.255497 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-19 11:27:19.255509 | orchestrator | 2025-09-19 11:27:19.255522 | orchestrator | 2025-09-19 11:27:19.255536 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:27:19.255548 | orchestrator | Friday 19 September 2025 11:27:18 +0000 (0:00:09.889) 0:02:26.979 ****** 2025-09-19 11:27:19.255560 | orchestrator | =============================================================================== 2025-09-19 11:27:19.255602 | orchestrator | common : Restart fluentd container ------------------------------------- 39.01s 2025-09-19 11:27:19.255616 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 33.49s 2025-09-19 11:27:19.255630 | orchestrator | common : Restart 
cron container ----------------------------------------- 9.89s 2025-09-19 11:27:19.255643 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.73s 2025-09-19 11:27:19.255657 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 5.69s 2025-09-19 11:27:19.255671 | orchestrator | common : Copying over config.json files for services -------------------- 5.08s 2025-09-19 11:27:19.255684 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.69s 2025-09-19 11:27:19.255706 | orchestrator | common : Check common containers ---------------------------------------- 4.57s 2025-09-19 11:27:19.255719 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.69s 2025-09-19 11:27:19.255732 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.56s 2025-09-19 11:27:19.255745 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.16s 2025-09-19 11:27:19.255758 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.01s 2025-09-19 11:27:19.255771 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.00s 2025-09-19 11:27:19.255783 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.95s 2025-09-19 11:27:19.255795 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.94s 2025-09-19 11:27:19.255807 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.06s 2025-09-19 11:27:19.255819 | orchestrator | common : Creating log volume -------------------------------------------- 1.94s 2025-09-19 11:27:19.255832 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.52s 2025-09-19 11:27:19.255844 | orchestrator | common : Link kolla_logs 
volume to /var/log/kolla ----------------------- 1.43s 2025-09-19 11:27:19.255857 | orchestrator | common : include_tasks -------------------------------------------------- 1.41s 2025-09-19 11:27:19.255870 | orchestrator | 2025-09-19 11:27:19 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:27:19.255882 | orchestrator | 2025-09-19 11:27:19 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:27:19.255895 | orchestrator | 2025-09-19 11:27:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:27:22.314491 | orchestrator | 2025-09-19 11:27:22 | INFO  | Task d3d5089d-4f4d-432a-88ca-8eddc5c328e0 is in state STARTED 2025-09-19 11:27:22.314905 | orchestrator | 2025-09-19 11:27:22 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:27:22.315711 | orchestrator | 2025-09-19 11:27:22 | INFO  | Task 668ed6d6-21a0-444c-89db-9f1f000861c3 is in state STARTED 2025-09-19 11:27:22.318901 | orchestrator | 2025-09-19 11:27:22 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:27:22.319752 | orchestrator | 2025-09-19 11:27:22 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED 2025-09-19 11:27:22.321466 | orchestrator | 2025-09-19 11:27:22 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:27:22.321654 | orchestrator | 2025-09-19 11:27:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:27:40.828210 | orchestrator | 2025-09-19 11:27:40 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:27:40.828296 | orchestrator | 2025-09-19 11:27:40 | INFO  | Task d3d5089d-4f4d-432a-88ca-8eddc5c328e0 is in state STARTED 2025-09-19 11:27:40.828909 | orchestrator | 2025-09-19 11:27:40 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:27:40.829384 | orchestrator | 2025-09-19 11:27:40 | INFO  | Task 668ed6d6-21a0-444c-89db-9f1f000861c3 is in state SUCCESS 2025-09-19 11:27:40.830102 | orchestrator | 2025-09-19 11:27:40 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:27:40.830739 | orchestrator | 2025-09-19 11:27:40 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED 2025-09-19 11:27:40.831405 | orchestrator | 2025-09-19 11:27:40 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:27:40.831427 | orchestrator | 2025-09-19 11:27:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:27:50.072024 | orchestrator | 2025-09-19 11:27:50 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:27:50.075735 | orchestrator | 2025-09-19 11:27:50 | INFO  | Task d3d5089d-4f4d-432a-88ca-8eddc5c328e0 is in state STARTED 2025-09-19 11:27:50.079221 | orchestrator | 2025-09-19 11:27:50 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:27:50.081686 | orchestrator | 2025-09-19 11:27:50 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:27:50.084719 | orchestrator | 2025-09-19 11:27:50 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED 2025-09-19 11:27:50.087331 | orchestrator | 2025-09-19 11:27:50 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:27:50.087727 | orchestrator | 2025-09-19 11:27:50 | INFO  | Wait 1
second(s) until the next check
2025-09-19 11:27:53.123098 | orchestrator | 2025-09-19 11:27:53 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:27:53.124469 | orchestrator | 2025-09-19 11:27:53 | INFO  | Task d3d5089d-4f4d-432a-88ca-8eddc5c328e0 is in state SUCCESS
2025-09-19 11:27:53.125923 | orchestrator |
2025-09-19 11:27:53.125969 | orchestrator |
2025-09-19 11:27:53.125987 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:27:53.125999 | orchestrator |
2025-09-19 11:27:53.126010 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:27:53.126077 | orchestrator | Friday 19 September 2025 11:27:24 +0000 (0:00:00.257) 0:00:00.257 ******
2025-09-19 11:27:53.126088 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:27:53.126099 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:27:53.126110 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:27:53.126120 | orchestrator |
2025-09-19 11:27:53.126131 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:27:53.126142 | orchestrator | Friday 19 September 2025 11:27:24 +0000 (0:00:00.457) 0:00:00.715 ******
2025-09-19 11:27:53.126153 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-09-19 11:27:53.126164 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-09-19 11:27:53.126174 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-09-19 11:27:53.126185 | orchestrator |
2025-09-19 11:27:53.126196 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-09-19 11:27:53.126206 | orchestrator |
2025-09-19 11:27:53.126217 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-09-19 11:27:53.126227 | orchestrator | Friday 19 September 2025 11:27:25 +0000 (0:00:00.545) 0:00:01.261 ******
2025-09-19 11:27:53.126238 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:27:53.126249 | orchestrator |
2025-09-19 11:27:53.126260 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-09-19 11:27:53.126271 | orchestrator | Friday 19 September 2025 11:27:26 +0000 (0:00:00.553) 0:00:01.815 ******
2025-09-19 11:27:53.126281 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-19 11:27:53.126292 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-19 11:27:53.126302 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-19 11:27:53.126313 | orchestrator |
2025-09-19 11:27:53.126323 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-09-19 11:27:53.126334 | orchestrator | Friday 19 September 2025 11:27:26 +0000 (0:00:00.873) 0:00:02.688 ******
2025-09-19 11:27:53.126345 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-19 11:27:53.126379 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-19 11:27:53.126390 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-19 11:27:53.126401 | orchestrator |
2025-09-19 11:27:53.126412 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-09-19 11:27:53.126422 | orchestrator | Friday 19 September 2025 11:27:29 +0000 (0:00:02.179) 0:00:04.867 ******
2025-09-19 11:27:53.126433 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:27:53.126443 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:27:53.126454 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:27:53.126464 | orchestrator |
2025-09-19 11:27:53.126475 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-09-19 11:27:53.126486 | orchestrator | Friday 19 September 2025 11:27:31 +0000 (0:00:02.559) 0:00:07.426 ******
2025-09-19 11:27:53.126496 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:27:53.126507 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:27:53.126542 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:27:53.126553 | orchestrator |
2025-09-19 11:27:53.126564 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:27:53.126587 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:27:53.126599 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:27:53.126610 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:27:53.126621 | orchestrator |
2025-09-19 11:27:53.126631 | orchestrator |
2025-09-19 11:27:53.126642 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:27:53.126653 | orchestrator | Friday 19 September 2025 11:27:38 +0000 (0:00:07.146) 0:00:14.573 ******
2025-09-19 11:27:53.126663 | orchestrator | ===============================================================================
2025-09-19 11:27:53.126674 | orchestrator | memcached : Restart memcached container --------------------------------- 7.15s
2025-09-19 11:27:53.126684 | orchestrator | memcached : Check memcached container ----------------------------------- 2.56s
2025-09-19 11:27:53.126695 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.18s
2025-09-19 11:27:53.126705 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.87s
2025-09-19 11:27:53.126716 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.55s
2025-09-19 11:27:53.126726 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s
2025-09-19 11:27:53.126737 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.46s
2025-09-19 11:27:53.126747 | orchestrator |
2025-09-19 11:27:53.126758 | orchestrator |
2025-09-19 11:27:53.126768 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:27:53.126779 | orchestrator |
2025-09-19 11:27:53.126789 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:27:53.126800 | orchestrator | Friday 19 September 2025 11:27:25 +0000 (0:00:00.456) 0:00:00.456 ******
2025-09-19 11:27:53.126810 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:27:53.126821 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:27:53.126831 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:27:53.126842 | orchestrator |
2025-09-19 11:27:53.126852 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:27:53.126875 | orchestrator | Friday 19 September 2025 11:27:25 +0000 (0:00:00.324) 0:00:00.780 ******
2025-09-19 11:27:53.126886 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-09-19 11:27:53.126897 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-09-19 11:27:53.126907 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-09-19 11:27:53.126918 | orchestrator |
2025-09-19 11:27:53.126928 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-09-19 11:27:53.126947 | orchestrator |
2025-09-19 11:27:53.126958 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-09-19 11:27:53.126969 | orchestrator | Friday 19 September 2025 11:27:26 +0000 (0:00:00.417) 0:00:01.198 ******
2025-09-19 11:27:53.126980 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:27:53.126994 | orchestrator | 2025-09-19 11:27:53.127012 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-09-19 11:27:53.127032 | orchestrator | Friday 19 September 2025 11:27:26 +0000 (0:00:00.615) 0:00:01.814 ****** 2025-09-19 11:27:53.127052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127318 | orchestrator | 2025-09-19 11:27:53.127330 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-19 11:27:53.127341 | orchestrator | Friday 19 September 2025 11:27:28 +0000 (0:00:01.353) 0:00:03.167 ****** 2025-09-19 11:27:53.127353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 
'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 
11:27:53.127429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127441 | orchestrator | 2025-09-19 11:27:53.127455 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-19 11:27:53.127475 | orchestrator | Friday 19 September 2025 11:27:30 +0000 (0:00:02.812) 0:00:05.980 ****** 2025-09-19 11:27:53.127494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127708 | orchestrator | 2025-09-19 11:27:53.127727 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-19 11:27:53.127746 | orchestrator | Friday 19 September 2025 11:27:34 +0000 (0:00:03.826) 0:00:09.807 ****** 2025-09-19 11:27:53.127765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127785 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127847 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 11:27:53.127890 | orchestrator | 2025-09-19 11:27:53.127909 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 11:27:53.127934 | orchestrator | Friday 19 September 2025 11:27:36 +0000 (0:00:02.027) 0:00:11.834 ****** 2025-09-19 11:27:53.127958 | orchestrator | 2025-09-19 11:27:53.127977 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 11:27:53.127996 | orchestrator | Friday 19 September 2025 11:27:36 +0000 (0:00:00.063) 0:00:11.897 ****** 2025-09-19 11:27:53.128014 | 
orchestrator |
2025-09-19 11:27:53.128034 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-19 11:27:53.128053 | orchestrator | Friday 19 September 2025 11:27:36 +0000 (0:00:00.057) 0:00:11.955 ******
2025-09-19 11:27:53.128068 | orchestrator |
2025-09-19 11:27:53.128080 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-09-19 11:27:53.128093 | orchestrator | Friday 19 September 2025 11:27:36 +0000 (0:00:00.066) 0:00:12.021 ******
2025-09-19 11:27:53.128105 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:27:53.128118 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:27:53.128130 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:27:53.128145 | orchestrator |
2025-09-19 11:27:53.128164 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-09-19 11:27:53.128183 | orchestrator | Friday 19 September 2025 11:27:40 +0000 (0:00:03.685) 0:00:15.706 ******
2025-09-19 11:27:53.128201 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:27:53.128227 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:27:53.128248 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:27:53.128266 | orchestrator |
2025-09-19 11:27:53.128285 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:27:53.128303 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:27:53.128322 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:27:53.128341 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:27:53.128360 | orchestrator |
2025-09-19 11:27:53.128378 | orchestrator |
2025-09-19 11:27:53.128396 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:27:53.128415 | orchestrator | Friday 19 September 2025 11:27:50 +0000 (0:00:09.644) 0:00:25.350 ******
2025-09-19 11:27:53.128434 | orchestrator | ===============================================================================
2025-09-19 11:27:53.128452 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.64s
2025-09-19 11:27:53.128469 | orchestrator | redis : Copying over redis config files --------------------------------- 3.83s
2025-09-19 11:27:53.128480 | orchestrator | redis : Restart redis container ----------------------------------------- 3.69s
2025-09-19 11:27:53.128501 | orchestrator | redis : Copying over default config.json files -------------------------- 2.81s
2025-09-19 11:27:53.128542 | orchestrator | redis : Check redis containers ------------------------------------------ 2.03s
2025-09-19 11:27:53.128554 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.35s
2025-09-19 11:27:53.128566 | orchestrator | redis : include_tasks --------------------------------------------------- 0.62s
2025-09-19 11:27:53.128576 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s
2025-09-19 11:27:53.128587 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2025-09-19 11:27:53.128597 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.19s
2025-09-19 11:27:53.128608 | orchestrator | 2025-09-19 11:27:53 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:27:53.128619 | orchestrator | 2025-09-19 11:27:53 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:27:53.129215 | orchestrator | 2025-09-19 11:27:53 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED
2025-09-19 11:27:53.130101 | orchestrator
| 2025-09-19 11:27:53 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:27:53.130194 | orchestrator | 2025-09-19 11:27:53 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:27:56.214781 | orchestrator | 2025-09-19 11:27:56 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:27:56.214879 | orchestrator | 2025-09-19 11:27:56 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:27:56.214891 | orchestrator | 2025-09-19 11:27:56 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:27:56.214901 | orchestrator | 2025-09-19 11:27:56 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED
2025-09-19 11:27:56.214910 | orchestrator | 2025-09-19 11:27:56 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:27:56.214920 | orchestrator | 2025-09-19 11:27:56 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:27:59.234722 | orchestrator | 2025-09-19 11:27:59 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:27:59.234823 | orchestrator | 2025-09-19 11:27:59 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:27:59.234838 | orchestrator | 2025-09-19 11:27:59 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:27:59.237673 | orchestrator | 2025-09-19 11:27:59 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED
2025-09-19 11:27:59.237731 | orchestrator | 2025-09-19 11:27:59 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:27:59.237743 | orchestrator | 2025-09-19 11:27:59 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:28:02.301634 | orchestrator | 2025-09-19 11:28:02 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:02.301715 | orchestrator | 2025-09-19 11:28:02 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:28:02.301729 | orchestrator | 2025-09-19 11:28:02 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:02.301740 | orchestrator | 2025-09-19 11:28:02 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED
2025-09-19 11:28:02.301751 | orchestrator | 2025-09-19 11:28:02 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:02.301762 | orchestrator | 2025-09-19 11:28:02 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:28:05.537153 | orchestrator | 2025-09-19 11:28:05 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:05.582409 | orchestrator | 2025-09-19 11:28:05 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:28:05.582482 | orchestrator | 2025-09-19 11:28:05 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:05.583660 | orchestrator | 2025-09-19 11:28:05 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED
2025-09-19 11:28:05.588465 | orchestrator | 2025-09-19 11:28:05 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:05.588557 | orchestrator | 2025-09-19 11:28:05 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:28:08.614278 | orchestrator | 2025-09-19 11:28:08 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:08.615608 | orchestrator | 2025-09-19 11:28:08 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:28:08.616289 | orchestrator | 2025-09-19 11:28:08 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:08.616977 | orchestrator | 2025-09-19 11:28:08 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED
2025-09-19 11:28:08.618945 | orchestrator | 2025-09-19 11:28:08 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:08.619014 | orchestrator | 2025-09-19 11:28:08 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:28:11.664212 | orchestrator | 2025-09-19 11:28:11 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:11.666230 | orchestrator | 2025-09-19 11:28:11 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:28:11.666722 | orchestrator | 2025-09-19 11:28:11 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:11.668168 | orchestrator | 2025-09-19 11:28:11 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED
2025-09-19 11:28:11.670689 | orchestrator | 2025-09-19 11:28:11 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:11.670756 | orchestrator | 2025-09-19 11:28:11 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:28:14.694769 | orchestrator | 2025-09-19 11:28:14 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:14.695470 | orchestrator | 2025-09-19 11:28:14 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:28:14.696018 | orchestrator | 2025-09-19 11:28:14 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:14.696923 | orchestrator | 2025-09-19 11:28:14 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED
2025-09-19 11:28:14.697658 | orchestrator | 2025-09-19 11:28:14 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:14.697856 | orchestrator | 2025-09-19 11:28:14 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:28:17.751050 | orchestrator | 2025-09-19 11:28:17 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:17.752846 | orchestrator | 2025-09-19 11:28:17 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:28:17.755587 | orchestrator | 2025-09-19 11:28:17 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:17.756657 | orchestrator | 2025-09-19 11:28:17 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED
2025-09-19 11:28:17.758183 | orchestrator | 2025-09-19 11:28:17 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:17.758224 | orchestrator | 2025-09-19 11:28:17 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:28:20.792817 | orchestrator | 2025-09-19 11:28:20 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:20.793067 | orchestrator | 2025-09-19 11:28:20 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:28:20.794623 | orchestrator | 2025-09-19 11:28:20 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:20.795402 | orchestrator | 2025-09-19 11:28:20 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED
2025-09-19 11:28:20.796542 | orchestrator | 2025-09-19 11:28:20 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:20.796581 | orchestrator | 2025-09-19 11:28:20 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:28:23.888796 | orchestrator | 2025-09-19 11:28:23 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:23.889446 | orchestrator | 2025-09-19 11:28:23 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED
2025-09-19 11:28:23.896149 | orchestrator | 2025-09-19 11:28:23 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:23.896971 | orchestrator | 2025-09-19 11:28:23 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED
2025-09-19 11:28:23.899931 | orchestrator | 2025-09-19 11:28:23 | INFO  | Task
16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:28:23.899957 | orchestrator | 2025-09-19 11:28:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:28:27.223962 | orchestrator | 2025-09-19 11:28:27 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:28:27.235956 | orchestrator | 2025-09-19 11:28:27 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:28:27.236049 | orchestrator | 2025-09-19 11:28:27 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:28:27.236084 | orchestrator | 2025-09-19 11:28:27 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED 2025-09-19 11:28:27.236096 | orchestrator | 2025-09-19 11:28:27 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:28:27.236108 | orchestrator | 2025-09-19 11:28:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:28:30.454770 | orchestrator | 2025-09-19 11:28:30 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:28:30.454850 | orchestrator | 2025-09-19 11:28:30 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:28:30.465386 | orchestrator | 2025-09-19 11:28:30 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:28:30.468024 | orchestrator | 2025-09-19 11:28:30 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED 2025-09-19 11:28:30.474175 | orchestrator | 2025-09-19 11:28:30 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:28:30.474221 | orchestrator | 2025-09-19 11:28:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:28:33.643776 | orchestrator | 2025-09-19 11:28:33 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:28:33.643859 | orchestrator | 2025-09-19 11:28:33 | INFO  | Task 
975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:28:33.643873 | orchestrator | 2025-09-19 11:28:33 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:28:33.643907 | orchestrator | 2025-09-19 11:28:33 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED 2025-09-19 11:28:33.643919 | orchestrator | 2025-09-19 11:28:33 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:28:33.643929 | orchestrator | 2025-09-19 11:28:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:28:36.677775 | orchestrator | 2025-09-19 11:28:36 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:28:36.677920 | orchestrator | 2025-09-19 11:28:36 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state STARTED 2025-09-19 11:28:36.678466 | orchestrator | 2025-09-19 11:28:36 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:28:36.679043 | orchestrator | 2025-09-19 11:28:36 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state STARTED 2025-09-19 11:28:36.679620 | orchestrator | 2025-09-19 11:28:36 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:28:36.679691 | orchestrator | 2025-09-19 11:28:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:28:39.718671 | orchestrator | 2025-09-19 11:28:39 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:28:39.718892 | orchestrator | 2025-09-19 11:28:39 | INFO  | Task 975b611a-f1c1-4060-80e5-ec9495fc0db4 is in state SUCCESS 2025-09-19 11:28:39.721006 | orchestrator | 2025-09-19 11:28:39.721069 | orchestrator | 2025-09-19 11:28:39.721082 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-19 11:28:39.721094 | orchestrator | 2025-09-19 11:28:39.721105 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 
'main' - Prerequisites] *** 2025-09-19 11:28:39.721118 | orchestrator | Friday 19 September 2025 11:24:51 +0000 (0:00:00.171) 0:00:00.171 ****** 2025-09-19 11:28:39.721129 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:28:39.721141 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:28:39.721152 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:28:39.721163 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:28:39.721173 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:28:39.721184 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:28:39.721195 | orchestrator | 2025-09-19 11:28:39.721206 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-19 11:28:39.721225 | orchestrator | Friday 19 September 2025 11:24:52 +0000 (0:00:00.688) 0:00:00.860 ****** 2025-09-19 11:28:39.721244 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:28:39.721262 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:28:39.721279 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:39.721299 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.721319 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.721337 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.721357 | orchestrator | 2025-09-19 11:28:39.721371 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-19 11:28:39.721382 | orchestrator | Friday 19 September 2025 11:24:52 +0000 (0:00:00.637) 0:00:01.498 ****** 2025-09-19 11:28:39.721393 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:28:39.721403 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:28:39.721414 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:39.721424 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.721511 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.721526 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.721538 | 
orchestrator | 2025-09-19 11:28:39.721550 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-19 11:28:39.721563 | orchestrator | Friday 19 September 2025 11:24:53 +0000 (0:00:00.683) 0:00:02.182 ****** 2025-09-19 11:28:39.721575 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:28:39.721621 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:28:39.721634 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:28:39.721646 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:28:39.721658 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:28:39.721670 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:28:39.721682 | orchestrator | 2025-09-19 11:28:39.721694 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-19 11:28:39.721707 | orchestrator | Friday 19 September 2025 11:24:55 +0000 (0:00:01.959) 0:00:04.142 ****** 2025-09-19 11:28:39.721719 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:28:39.721732 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:28:39.721743 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:28:39.721754 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:28:39.721764 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:28:39.721775 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:28:39.721785 | orchestrator | 2025-09-19 11:28:39.721796 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-19 11:28:39.721807 | orchestrator | Friday 19 September 2025 11:24:57 +0000 (0:00:01.520) 0:00:05.662 ****** 2025-09-19 11:28:39.721818 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:28:39.721829 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:28:39.721839 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:28:39.721850 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:28:39.721860 | orchestrator | 
changed: [testbed-node-2] 2025-09-19 11:28:39.721870 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:28:39.721881 | orchestrator | 2025-09-19 11:28:39.721892 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-19 11:28:39.721902 | orchestrator | Friday 19 September 2025 11:24:58 +0000 (0:00:00.963) 0:00:06.626 ****** 2025-09-19 11:28:39.721913 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:28:39.721923 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:28:39.721934 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:39.721944 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.721955 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.721965 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.721976 | orchestrator | 2025-09-19 11:28:39.721986 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-19 11:28:39.721997 | orchestrator | Friday 19 September 2025 11:24:58 +0000 (0:00:00.455) 0:00:07.082 ****** 2025-09-19 11:28:39.722008 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:28:39.722086 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:28:39.722098 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:39.722109 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.722119 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.722130 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.722140 | orchestrator | 2025-09-19 11:28:39.722151 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-19 11:28:39.722162 | orchestrator | Friday 19 September 2025 11:24:59 +0000 (0:00:00.661) 0:00:07.743 ****** 2025-09-19 11:28:39.722173 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:28:39.722183 | orchestrator | skipping: [testbed-node-4] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:28:39.722194 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:28:39.722205 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:28:39.722219 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:28:39.722237 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:28:39.722257 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:28:39.722275 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:28:39.722293 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:28:39.722327 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:28:39.722368 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:39.722381 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:28:39.722392 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:28:39.722402 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.722413 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.722423 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 11:28:39.722496 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 11:28:39.722509 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.722520 | orchestrator | 2025-09-19 11:28:39.722531 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-19 11:28:39.722541 | orchestrator | Friday 19 September 2025 11:25:00 +0000 (0:00:00.806) 0:00:08.549 ****** 2025-09-19 11:28:39.722552 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 11:28:39.722562 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:28:39.722573 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:39.722584 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.722594 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.722605 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.722615 | orchestrator | 2025-09-19 11:28:39.722626 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-19 11:28:39.722638 | orchestrator | Friday 19 September 2025 11:25:01 +0000 (0:00:01.775) 0:00:10.325 ****** 2025-09-19 11:28:39.722649 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:28:39.722659 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:28:39.722670 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:28:39.722681 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:28:39.722691 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:28:39.722702 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:28:39.722712 | orchestrator | 2025-09-19 11:28:39.722723 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-19 11:28:39.722734 | orchestrator | Friday 19 September 2025 11:25:03 +0000 (0:00:01.297) 0:00:11.622 ****** 2025-09-19 11:28:39.722744 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:28:39.722762 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:28:39.722772 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:28:39.722783 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:28:39.722794 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:28:39.722804 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:28:39.722814 | orchestrator | 2025-09-19 11:28:39.722825 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-19 11:28:39.722836 | orchestrator | 
Friday 19 September 2025 11:25:08 +0000 (0:00:05.063) 0:00:16.686 ****** 2025-09-19 11:28:39.722847 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:28:39.722857 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:28:39.722867 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:39.722878 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.722889 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.722899 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.722910 | orchestrator | 2025-09-19 11:28:39.722921 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-19 11:28:39.722931 | orchestrator | Friday 19 September 2025 11:25:09 +0000 (0:00:01.464) 0:00:18.150 ****** 2025-09-19 11:28:39.722942 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:28:39.722953 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:28:39.722963 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:39.722974 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.722984 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.722995 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.723015 | orchestrator | 2025-09-19 11:28:39.723026 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-19 11:28:39.723039 | orchestrator | Friday 19 September 2025 11:25:12 +0000 (0:00:02.861) 0:00:21.012 ****** 2025-09-19 11:28:39.723050 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:28:39.723060 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:28:39.723071 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:28:39.723082 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:28:39.723092 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:28:39.723103 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:28:39.723113 | orchestrator | 2025-09-19 
11:28:39.723122 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-19 11:28:39.723132 | orchestrator | Friday 19 September 2025 11:25:13 +0000 (0:00:01.137) 0:00:22.149 ****** 2025-09-19 11:28:39.723141 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-19 11:28:39.723151 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-19 11:28:39.723160 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-19 11:28:39.723170 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-19 11:28:39.723179 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-19 11:28:39.723188 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-19 11:28:39.723198 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-19 11:28:39.723240 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-19 11:28:39.723258 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-19 11:28:39.723275 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-19 11:28:39.723286 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-19 11:28:39.723295 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-19 11:28:39.723304 | orchestrator | 2025-09-19 11:28:39.723314 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-19 11:28:39.723324 | orchestrator | Friday 19 September 2025 11:25:16 +0000 (0:00:02.747) 0:00:24.896 ****** 2025-09-19 11:28:39.723333 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:28:39.723342 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:28:39.723352 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:28:39.723361 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:28:39.723371 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:28:39.723380 | orchestrator 
| changed: [testbed-node-2] 2025-09-19 11:28:39.723389 | orchestrator | 2025-09-19 11:28:39.723407 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-19 11:28:39.723417 | orchestrator | 2025-09-19 11:28:39.723427 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-19 11:28:39.723457 | orchestrator | Friday 19 September 2025 11:25:18 +0000 (0:00:02.119) 0:00:27.016 ****** 2025-09-19 11:28:39.723467 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:28:39.723477 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:28:39.723486 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:28:39.723496 | orchestrator | 2025-09-19 11:28:39.723505 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-19 11:28:39.723515 | orchestrator | Friday 19 September 2025 11:25:19 +0000 (0:00:00.998) 0:00:28.015 ****** 2025-09-19 11:28:39.723524 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:28:39.723534 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:28:39.723544 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:28:39.723553 | orchestrator | 2025-09-19 11:28:39.723563 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-19 11:28:39.723573 | orchestrator | Friday 19 September 2025 11:25:20 +0000 (0:00:01.119) 0:00:29.134 ****** 2025-09-19 11:28:39.723583 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:28:39.723592 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:28:39.723602 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:28:39.723611 | orchestrator | 2025-09-19 11:28:39.723630 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-19 11:28:39.723640 | orchestrator | Friday 19 September 2025 11:25:21 +0000 (0:00:01.173) 0:00:30.307 ****** 2025-09-19 11:28:39.723649 | orchestrator | ok: [testbed-node-0] 
2025-09-19 11:28:39.723659 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:28:39.723668 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:28:39.723678 | orchestrator | 2025-09-19 11:28:39.723688 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-19 11:28:39.723698 | orchestrator | Friday 19 September 2025 11:25:22 +0000 (0:00:01.057) 0:00:31.365 ****** 2025-09-19 11:28:39.723708 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.723717 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.723727 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.723737 | orchestrator | 2025-09-19 11:28:39.723752 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-19 11:28:39.723761 | orchestrator | Friday 19 September 2025 11:25:23 +0000 (0:00:00.379) 0:00:31.744 ****** 2025-09-19 11:28:39.723771 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:28:39.723781 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:28:39.723790 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:28:39.723800 | orchestrator | 2025-09-19 11:28:39.723810 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-19 11:28:39.723820 | orchestrator | Friday 19 September 2025 11:25:23 +0000 (0:00:00.645) 0:00:32.390 ****** 2025-09-19 11:28:39.723830 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:28:39.723839 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:28:39.723849 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:28:39.723859 | orchestrator | 2025-09-19 11:28:39.723868 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-19 11:28:39.723878 | orchestrator | Friday 19 September 2025 11:25:25 +0000 (0:00:01.549) 0:00:33.940 ****** 2025-09-19 11:28:39.723888 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:28:39.723898 | orchestrator | 2025-09-19 11:28:39.723907 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-19 11:28:39.723917 | orchestrator | Friday 19 September 2025 11:25:25 +0000 (0:00:00.486) 0:00:34.426 ****** 2025-09-19 11:28:39.723927 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:28:39.723936 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:28:39.723946 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:28:39.723956 | orchestrator | 2025-09-19 11:28:39.723965 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-19 11:28:39.723975 | orchestrator | Friday 19 September 2025 11:25:27 +0000 (0:00:02.029) 0:00:36.456 ****** 2025-09-19 11:28:39.723985 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.723994 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.724004 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:28:39.724013 | orchestrator | 2025-09-19 11:28:39.724023 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-19 11:28:39.724033 | orchestrator | Friday 19 September 2025 11:25:28 +0000 (0:00:00.589) 0:00:37.045 ****** 2025-09-19 11:28:39.724042 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.724052 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.724062 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:28:39.724071 | orchestrator | 2025-09-19 11:28:39.724081 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-19 11:28:39.724090 | orchestrator | Friday 19 September 2025 11:25:29 +0000 (0:00:00.992) 0:00:38.038 ****** 2025-09-19 11:28:39.724100 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.724109 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.724119 | orchestrator | 
changed: [testbed-node-0] 2025-09-19 11:28:39.724128 | orchestrator | 2025-09-19 11:28:39.724138 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-19 11:28:39.724148 | orchestrator | Friday 19 September 2025 11:25:30 +0000 (0:00:01.401) 0:00:39.439 ****** 2025-09-19 11:28:39.724166 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.724176 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.724185 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.724195 | orchestrator | 2025-09-19 11:28:39.724205 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-09-19 11:28:39.724215 | orchestrator | Friday 19 September 2025 11:25:31 +0000 (0:00:00.493) 0:00:39.933 ****** 2025-09-19 11:28:39.724224 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.724234 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.724243 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.724253 | orchestrator | 2025-09-19 11:28:39.724263 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-09-19 11:28:39.724272 | orchestrator | Friday 19 September 2025 11:25:32 +0000 (0:00:00.839) 0:00:40.772 ****** 2025-09-19 11:28:39.724282 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:28:39.724291 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:28:39.724301 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:28:39.724310 | orchestrator | 2025-09-19 11:28:39.724325 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-09-19 11:28:39.724336 | orchestrator | Friday 19 September 2025 11:25:34 +0000 (0:00:02.328) 0:00:43.100 ****** 2025-09-19 11:28:39.724345 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries 
left). 2025-09-19 11:28:39.724356 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-19 11:28:39.724366 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-09-19 11:28:39.724376 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-19 11:28:39.724386 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-19 11:28:39.724395 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-19 11:28:39.724405 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-19 11:28:39.724415 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-19 11:28:39.724429 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-19 11:28:39.724458 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-19 11:28:39.724468 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-19 11:28:39.724478 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2025-09-19 11:28:39.724488 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-19 11:28:39.724498 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-19 11:28:39.724507 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-19 11:28:39.724517 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:28:39.724533 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:28:39.724543 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:28:39.724553 | orchestrator |
2025-09-19 11:28:39.724563 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-09-19 11:28:39.724573 | orchestrator | Friday 19 September 2025 11:26:30 +0000 (0:00:55.827) 0:01:38.928 ******
2025-09-19 11:28:39.724583 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.724593 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:28:39.724602 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:28:39.724611 | orchestrator |
2025-09-19 11:28:39.724621 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-09-19 11:28:39.724631 | orchestrator | Friday 19 September 2025 11:26:30 +0000 (0:00:00.506) 0:01:39.435 ******
2025-09-19 11:28:39.724641 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:28:39.724650 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:28:39.724660 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:28:39.724669 | orchestrator |
2025-09-19 11:28:39.724679 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-09-19 11:28:39.724689 | orchestrator | Friday 19 September 2025 11:26:32 +0000 (0:00:01.370) 0:01:40.806 ******
2025-09-19 11:28:39.724698 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:28:39.724708 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:28:39.724717 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:28:39.724727 | orchestrator |
2025-09-19 11:28:39.724737 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-09-19 11:28:39.724746 | orchestrator | Friday 19 September 2025 11:26:33 +0000 (0:00:01.296) 0:01:42.103 ******
2025-09-19 11:28:39.724756 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:28:39.724766 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:28:39.724775 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:28:39.724785 | orchestrator |
2025-09-19 11:28:39.724795 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-09-19 11:28:39.724804 | orchestrator | Friday 19 September 2025 11:26:59 +0000 (0:00:25.589) 0:02:07.692 ******
2025-09-19 11:28:39.724814 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:28:39.724824 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:28:39.724833 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:28:39.724843 | orchestrator |
2025-09-19 11:28:39.724853 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-09-19 11:28:39.724862 | orchestrator | Friday 19 September 2025 11:26:59 +0000 (0:00:00.629) 0:02:08.322 ******
2025-09-19 11:28:39.724872 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:28:39.724882 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:28:39.724891 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:28:39.724901 | orchestrator |
2025-09-19 11:28:39.724919 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-09-19 11:28:39.724936 | orchestrator | Friday 19 September 2025 11:27:00 +0000 (0:00:00.816) 0:02:09.139 ******
2025-09-19 11:28:39.724952 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:28:39.724968 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:28:39.724985 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:28:39.725003 | orchestrator |
2025-09-19 11:28:39.725020 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-09-19 11:28:39.725035 | orchestrator | Friday 19 September 2025 11:27:01 +0000 (0:00:00.642) 0:02:09.782 ******
2025-09-19 11:28:39.725045 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:28:39.725055 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:28:39.725064 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:28:39.725074 | orchestrator |
2025-09-19 11:28:39.725084 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-09-19 11:28:39.725093 | orchestrator | Friday 19 September 2025 11:27:01 +0000 (0:00:00.574) 0:02:10.357 ******
2025-09-19 11:28:39.725103 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:28:39.725113 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:28:39.725122 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:28:39.725138 | orchestrator |
2025-09-19 11:28:39.725148 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-09-19 11:28:39.725157 | orchestrator | Friday 19 September 2025 11:27:02 +0000 (0:00:00.257) 0:02:10.615 ******
2025-09-19 11:28:39.725167 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:28:39.725176 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:28:39.725186 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:28:39.725195 | orchestrator |
2025-09-19 11:28:39.725205 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-09-19 11:28:39.725215 | orchestrator | Friday 19 September 2025 11:27:02 +0000 (0:00:00.745) 0:02:11.360 ******
2025-09-19 11:28:39.725224 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:28:39.725234 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:28:39.725244 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:28:39.725253 | orchestrator |
2025-09-19 11:28:39.725263 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-09-19 11:28:39.725277 | orchestrator | Friday 19 September 2025 11:27:03 +0000 (0:00:00.589) 0:02:11.949 ******
2025-09-19 11:28:39.725294 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:28:39.725308 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:28:39.725322 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:28:39.725337 | orchestrator |
2025-09-19 11:28:39.725353 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-09-19 11:28:39.725369 | orchestrator | Friday 19 September 2025 11:27:04 +0000 (0:00:00.860) 0:02:12.810 ******
2025-09-19 11:28:39.725386 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:28:39.725402 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:28:39.725418 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:28:39.725428 | orchestrator |
2025-09-19 11:28:39.725486 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-09-19 11:28:39.725496 | orchestrator | Friday 19 September 2025 11:27:05 +0000 (0:00:00.823) 0:02:13.634 ******
2025-09-19 11:28:39.725506 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.725515 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:28:39.725525 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:28:39.725534 | orchestrator |
2025-09-19 11:28:39.725544 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-09-19 11:28:39.725553 | orchestrator | Friday 19 September 2025 11:27:05 +0000 (0:00:00.402) 0:02:14.036 ******
2025-09-19 11:28:39.725563 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.725573 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:28:39.725582 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:28:39.725592 | orchestrator |
2025-09-19 11:28:39.725601 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-09-19 11:28:39.725611 | orchestrator | Friday 19 September 2025 11:27:05 +0000 (0:00:00.313) 0:02:14.349 ******
2025-09-19 11:28:39.725620 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:28:39.725630 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:28:39.725639 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:28:39.725649 | orchestrator |
2025-09-19 11:28:39.725659 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-09-19 11:28:39.725669 | orchestrator | Friday 19 September 2025 11:27:06 +0000 (0:00:00.626) 0:02:14.976 ******
2025-09-19 11:28:39.725678 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:28:39.725688 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:28:39.725697 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:28:39.725707 | orchestrator |
2025-09-19 11:28:39.725717 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-09-19 11:28:39.725727 | orchestrator | Friday 19 September 2025 11:27:07 +0000 (0:00:00.699) 0:02:15.676 ******
2025-09-19 11:28:39.725737 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-19 11:28:39.725747 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-19 11:28:39.725770 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-19 11:28:39.725780 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-19 11:28:39.725789 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-19 11:28:39.725799 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-19 11:28:39.725808 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-19 11:28:39.725818 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-19 11:28:39.725828 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-19 11:28:39.725845 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-09-19 11:28:39.725855 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-19 11:28:39.725864 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-19 11:28:39.725874 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-09-19 11:28:39.725883 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-19 11:28:39.725893 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-19 11:28:39.725902 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-19 11:28:39.725912 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-19 11:28:39.725921 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-19 11:28:39.725931 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-19 11:28:39.725941 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-19 11:28:39.725950 | orchestrator |
2025-09-19 11:28:39.725960 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-09-19 11:28:39.725969 | orchestrator |
2025-09-19 11:28:39.725979 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-09-19 11:28:39.725988 | orchestrator | Friday 19 September 2025 11:27:10 +0000 (0:00:03.352) 0:02:19.029 ******
2025-09-19 11:28:39.725998 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:28:39.726007 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:28:39.726048 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:39.726061 | orchestrator |
2025-09-19 11:28:39.726077 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-09-19 11:28:39.726087 | orchestrator | Friday 19 September 2025 11:27:10 +0000 (0:00:00.310) 0:02:19.339 ******
2025-09-19 11:28:39.726096 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:28:39.726106 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:28:39.726115 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:39.726125 | orchestrator |
2025-09-19 11:28:39.726135 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-09-19 11:28:39.726145 | orchestrator | Friday 19 September 2025 11:27:11 +0000 (0:00:00.567) 0:02:19.959 ******
2025-09-19 11:28:39.726154 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:28:39.726164 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:28:39.726173 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:39.726183 | orchestrator |
2025-09-19 11:28:39.726193 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-09-19 11:28:39.726203 | orchestrator | Friday 19 September 2025 11:27:12 +0000 (0:00:00.444) 0:02:20.526 ******
2025-09-19 11:28:39.726212 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:28:39.726229 | orchestrator |
2025-09-19 11:28:39.726239 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-09-19 11:28:39.726249 | orchestrator | Friday 19 September 2025 11:27:12 +0000 (0:00:00.444) 0:02:20.971 ******
2025-09-19 11:28:39.726258 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:28:39.726268 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:28:39.726277 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:39.726287 | orchestrator |
2025-09-19 11:28:39.726296 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-09-19 11:28:39.726306 | orchestrator | Friday 19 September 2025 11:27:12 +0000 (0:00:00.290) 0:02:21.261 ******
2025-09-19 11:28:39.726315 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:28:39.726325 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:28:39.726334 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:39.726344 | orchestrator |
2025-09-19 11:28:39.726354 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-09-19 11:28:39.726363 | orchestrator | Friday 19 September 2025 11:27:13 +0000 (0:00:00.417) 0:02:21.679 ******
2025-09-19 11:28:39.726373 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:28:39.726382 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:28:39.726392 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:39.726493 | orchestrator |
2025-09-19 11:28:39.726508 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2025-09-19 11:28:39.726518 | orchestrator | Friday 19 September 2025 11:27:13 +0000 (0:00:00.273) 0:02:21.952 ******
2025-09-19 11:28:39.726528 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:28:39.726537 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:28:39.726546 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:39.726556 | orchestrator |
2025-09-19 11:28:39.726566 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2025-09-19 11:28:39.726575 | orchestrator | Friday 19 September 2025 11:27:14 +0000 (0:00:00.650) 0:02:22.602 ******
2025-09-19 11:28:39.726585 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:28:39.726594 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:28:39.726603 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:28:39.726613 | orchestrator |
2025-09-19 11:28:39.726623 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-09-19 11:28:39.726632 | orchestrator | Friday 19 September 2025 11:27:15 +0000 (0:00:01.157) 0:02:23.760 ******
2025-09-19 11:28:39.726642 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:28:39.726652 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:28:39.726661 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:28:39.726671 | orchestrator |
2025-09-19 11:28:39.726680 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-09-19 11:28:39.726690 | orchestrator | Friday 19 September 2025 11:27:16 +0000 (0:00:01.546) 0:02:25.306 ******
2025-09-19 11:28:39.726699 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:28:39.726709 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:28:39.726718 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:28:39.726728 | orchestrator |
2025-09-19 11:28:39.726745 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-19 11:28:39.726756 | orchestrator |
2025-09-19 11:28:39.726766 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-19 11:28:39.726775 | orchestrator | Friday 19 September 2025 11:27:29 +0000 (0:00:12.535) 0:02:37.841 ******
2025-09-19 11:28:39.726792 | orchestrator | ok: [testbed-manager]
2025-09-19 11:28:39.726807 | orchestrator |
2025-09-19 11:28:39.726833 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-19 11:28:39.726849 | orchestrator | Friday 19 September 2025 11:27:30 +0000 (0:00:00.766) 0:02:38.608 ******
2025-09-19 11:28:39.726864 | orchestrator | changed: [testbed-manager]
2025-09-19 11:28:39.726879 | orchestrator |
2025-09-19 11:28:39.726895 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-19 11:28:39.726923 | orchestrator | Friday 19 September 2025 11:27:30 +0000 (0:00:00.391) 0:02:39.000 ******
2025-09-19 11:28:39.726937 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-19 11:28:39.726951 | orchestrator |
2025-09-19 11:28:39.726964 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-19 11:28:39.726980 | orchestrator | Friday 19 September 2025 11:27:31 +0000 (0:00:00.562) 0:02:39.562 ******
2025-09-19 11:28:39.726997 | orchestrator | changed: [testbed-manager]
2025-09-19 11:28:39.727008 | orchestrator |
2025-09-19 11:28:39.727021 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-19 11:28:39.727034 | orchestrator | Friday 19 September 2025 11:27:32 +0000 (0:00:01.295) 0:02:40.857 ******
2025-09-19 11:28:39.727046 | orchestrator | changed: [testbed-manager]
2025-09-19 11:28:39.727059 | orchestrator |
2025-09-19 11:28:39.727073 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-19 11:28:39.727087 | orchestrator | Friday 19 September 2025 11:27:33 +0000 (0:00:01.313) 0:02:42.171 ******
2025-09-19 11:28:39.727100 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 11:28:39.727115 | orchestrator |
2025-09-19 11:28:39.727129 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-19 11:28:39.727150 | orchestrator | Friday 19 September 2025 11:27:35 +0000 (0:00:01.718) 0:02:43.890 ******
2025-09-19 11:28:39.727159 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 11:28:39.727166 | orchestrator |
2025-09-19 11:28:39.727174 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-19 11:28:39.727182 | orchestrator | Friday 19 September 2025 11:27:36 +0000 (0:00:00.851) 0:02:44.741 ******
2025-09-19 11:28:39.727190 | orchestrator | changed: [testbed-manager]
2025-09-19 11:28:39.727197 | orchestrator |
2025-09-19 11:28:39.727205 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-19 11:28:39.727213 | orchestrator | Friday 19 September 2025 11:27:36 +0000 (0:00:00.617) 0:02:45.359 ******
2025-09-19 11:28:39.727221 | orchestrator | changed: [testbed-manager]
2025-09-19 11:28:39.727229 | orchestrator |
2025-09-19 11:28:39.727237 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-09-19 11:28:39.727245 | orchestrator |
2025-09-19 11:28:39.727252 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-09-19 11:28:39.727260 | orchestrator | Friday 19 September 2025 11:27:37 +0000 (0:00:00.425) 0:02:45.784 ******
2025-09-19 11:28:39.727268 | orchestrator | ok: [testbed-manager]
2025-09-19 11:28:39.727275 | orchestrator |
2025-09-19 11:28:39.727283 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-09-19 11:28:39.727291 | orchestrator | Friday 19 September 2025 11:27:37 +0000 (0:00:00.153) 0:02:45.938 ******
2025-09-19 11:28:39.727299 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 11:28:39.727306 | orchestrator |
2025-09-19 11:28:39.727314 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-09-19 11:28:39.727322 | orchestrator | Friday 19 September 2025 11:27:37 +0000 (0:00:00.223) 0:02:46.161 ******
2025-09-19 11:28:39.727329 | orchestrator | ok: [testbed-manager]
2025-09-19 11:28:39.727337 | orchestrator |
2025-09-19 11:28:39.727345 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-09-19 11:28:39.727352 | orchestrator | Friday 19 September 2025 11:27:38 +0000 (0:00:01.054) 0:02:47.215 ******
2025-09-19 11:28:39.727360 | orchestrator | ok: [testbed-manager]
2025-09-19 11:28:39.727368 | orchestrator |
2025-09-19 11:28:39.727375 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-09-19 11:28:39.727383 | orchestrator | Friday 19 September 2025 11:27:40 +0000 (0:00:01.668) 0:02:48.883 ******
2025-09-19 11:28:39.727391 | orchestrator | changed: [testbed-manager]
2025-09-19 11:28:39.727399 | orchestrator |
2025-09-19 11:28:39.727407 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-09-19 11:28:39.727414 | orchestrator | Friday 19 September 2025 11:27:41 +0000 (0:00:00.755) 0:02:49.639 ******
2025-09-19 11:28:39.727429 | orchestrator | ok: [testbed-manager]
2025-09-19 11:28:39.727458 | orchestrator |
2025-09-19 11:28:39.727466 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-09-19 11:28:39.727474 | orchestrator | Friday 19 September 2025 11:27:41 +0000 (0:00:00.463) 0:02:50.102 ******
2025-09-19 11:28:39.727482 | orchestrator | changed: [testbed-manager]
2025-09-19 11:28:39.727489 | orchestrator |
2025-09-19 11:28:39.727497 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-09-19 11:28:39.727505 | orchestrator | Friday 19 September 2025 11:27:49 +0000 (0:00:07.819) 0:02:57.922 ******
2025-09-19 11:28:39.727513 | orchestrator | changed: [testbed-manager]
2025-09-19 11:28:39.727521 | orchestrator |
2025-09-19 11:28:39.727529 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-09-19 11:28:39.727536 | orchestrator | Friday 19 September 2025 11:28:04 +0000 (0:00:15.003) 0:03:12.926 ******
2025-09-19 11:28:39.727544 | orchestrator | ok: [testbed-manager]
2025-09-19 11:28:39.727552 | orchestrator |
2025-09-19 11:28:39.727560 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-09-19 11:28:39.727568 | orchestrator |
2025-09-19 11:28:39.727576 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-09-19 11:28:39.727591 | orchestrator | Friday 19 September 2025 11:28:05 +0000 (0:00:00.683) 0:03:13.610 ******
2025-09-19 11:28:39.727599 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:28:39.727607 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:28:39.727615 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:28:39.727623 | orchestrator |
2025-09-19 11:28:39.727631 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-09-19 11:28:39.727638 | orchestrator | Friday 19 September 2025 11:28:05 +0000 (0:00:00.757) 0:03:14.368 ******
2025-09-19 11:28:39.727646 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.727654 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:28:39.727662 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:28:39.727670 | orchestrator |
2025-09-19 11:28:39.727678 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-09-19 11:28:39.727686 | orchestrator | Friday 19 September 2025 11:28:06 +0000 (0:00:00.413) 0:03:14.781 ******
2025-09-19 11:28:39.727693 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:28:39.727701 | orchestrator |
2025-09-19 11:28:39.727709 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-09-19 11:28:39.727717 | orchestrator | Friday 19 September 2025 11:28:06 +0000 (0:00:00.526) 0:03:15.308 ******
2025-09-19 11:28:39.727725 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.727732 | orchestrator |
2025-09-19 11:28:39.727740 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-09-19 11:28:39.727748 | orchestrator | Friday 19 September 2025 11:28:07 +0000 (0:00:00.225) 0:03:15.533 ******
2025-09-19 11:28:39.727756 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.727764 | orchestrator |
2025-09-19 11:28:39.727772 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-09-19 11:28:39.727779 | orchestrator | Friday 19 September 2025 11:28:07 +0000 (0:00:00.220) 0:03:15.754 ******
2025-09-19 11:28:39.727787 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.727795 | orchestrator |
2025-09-19 11:28:39.727803 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-09-19 11:28:39.727811 | orchestrator | Friday 19 September 2025 11:28:07 +0000 (0:00:00.550) 0:03:16.305 ******
2025-09-19 11:28:39.727823 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.727831 | orchestrator |
2025-09-19 11:28:39.727839 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-09-19 11:28:39.727847 | orchestrator | Friday 19 September 2025 11:28:07 +0000 (0:00:00.188) 0:03:16.493 ******
2025-09-19 11:28:39.727855 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.727908 | orchestrator |
2025-09-19 11:28:39.727918 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-09-19 11:28:39.727926 | orchestrator | Friday 19 September 2025 11:28:08 +0000 (0:00:00.306) 0:03:16.800 ******
2025-09-19 11:28:39.727933 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.727941 | orchestrator |
2025-09-19 11:28:39.727949 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-09-19 11:28:39.727957 | orchestrator | Friday 19 September 2025 11:28:08 +0000 (0:00:00.245) 0:03:17.045 ******
2025-09-19 11:28:39.727964 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.727972 | orchestrator |
2025-09-19 11:28:39.727980 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-09-19 11:28:39.727988 | orchestrator | Friday 19 September 2025 11:28:08 +0000 (0:00:00.164) 0:03:17.210 ******
2025-09-19 11:28:39.727995 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728003 | orchestrator |
2025-09-19 11:28:39.728011 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-09-19 11:28:39.728019 | orchestrator | Friday 19 September 2025 11:28:08 +0000 (0:00:00.156) 0:03:17.367 ******
2025-09-19 11:28:39.728026 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728034 | orchestrator |
2025-09-19 11:28:39.728042 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-09-19 11:28:39.728050 | orchestrator | Friday 19 September 2025 11:28:09 +0000 (0:00:00.200) 0:03:17.567 ******
2025-09-19 11:28:39.728058 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-09-19 11:28:39.728066 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-09-19 11:28:39.728074 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728081 | orchestrator |
2025-09-19 11:28:39.728089 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-09-19 11:28:39.728097 | orchestrator | Friday 19 September 2025 11:28:09 +0000 (0:00:00.245) 0:03:17.813 ******
2025-09-19 11:28:39.728105 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728112 | orchestrator |
2025-09-19 11:28:39.728120 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-09-19 11:28:39.728128 | orchestrator | Friday 19 September 2025 11:28:09 +0000 (0:00:00.185) 0:03:17.998 ******
2025-09-19 11:28:39.728136 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728144 | orchestrator |
2025-09-19 11:28:39.728152 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-09-19 11:28:39.728159 | orchestrator | Friday 19 September 2025 11:28:09 +0000 (0:00:00.163) 0:03:18.161 ******
2025-09-19 11:28:39.728167 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728175 | orchestrator |
2025-09-19 11:28:39.728183 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-09-19 11:28:39.728190 | orchestrator | Friday 19 September 2025 11:28:09 +0000 (0:00:00.192) 0:03:18.354 ******
2025-09-19 11:28:39.728198 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728206 | orchestrator |
2025-09-19 11:28:39.728214 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-09-19 11:28:39.728221 | orchestrator | Friday 19 September 2025 11:28:10 +0000 (0:00:00.206) 0:03:18.560 ******
2025-09-19 11:28:39.728229 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728237 | orchestrator |
2025-09-19 11:28:39.728245 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-09-19 11:28:39.728252 | orchestrator | Friday 19 September 2025 11:28:10 +0000 (0:00:00.207) 0:03:18.768 ******
2025-09-19 11:28:39.728283 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728292 | orchestrator |
2025-09-19 11:28:39.728300 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-09-19 11:28:39.728313 | orchestrator | Friday 19 September 2025 11:28:10 +0000 (0:00:00.542) 0:03:19.310 ******
2025-09-19 11:28:39.728321 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728329 | orchestrator |
2025-09-19 11:28:39.728337 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-09-19 11:28:39.728352 | orchestrator | Friday 19 September 2025 11:28:10 +0000 (0:00:00.194) 0:03:19.505 ******
2025-09-19 11:28:39.728359 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728367 | orchestrator |
2025-09-19 11:28:39.728375 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-09-19 11:28:39.728383 | orchestrator | Friday 19 September 2025 11:28:11 +0000 (0:00:00.277) 0:03:19.783 ******
2025-09-19 11:28:39.728390 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728398 | orchestrator |
2025-09-19 11:28:39.728406 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-09-19 11:28:39.728413 | orchestrator | Friday 19 September 2025 11:28:11 +0000 (0:00:00.263) 0:03:20.046 ******
2025-09-19 11:28:39.728421 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728429 | orchestrator |
2025-09-19 11:28:39.728477 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-09-19 11:28:39.728485 | orchestrator | Friday 19 September 2025 11:28:11 +0000 (0:00:00.219) 0:03:20.265 ******
2025-09-19 11:28:39.728493 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728501 | orchestrator |
2025-09-19 11:28:39.728508 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-09-19 11:28:39.728516 | orchestrator | Friday 19 September 2025 11:28:12 +0000 (0:00:00.293) 0:03:20.559 ******
2025-09-19 11:28:39.728524 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-09-19 11:28:39.728532 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-09-19 11:28:39.728540 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-09-19 11:28:39.728547 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-09-19 11:28:39.728555 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728563 | orchestrator |
2025-09-19 11:28:39.728571 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-09-19 11:28:39.728579 | orchestrator | Friday 19 September 2025 11:28:12 +0000 (0:00:00.413) 0:03:20.972 ******
2025-09-19 11:28:39.728594 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728602 | orchestrator |
2025-09-19 11:28:39.728610 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-09-19 11:28:39.728618 | orchestrator | Friday 19 September 2025 11:28:12 +0000 (0:00:00.245) 0:03:21.218 ******
2025-09-19 11:28:39.728757 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728766 | orchestrator |
2025-09-19 11:28:39.728774 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-09-19 11:28:39.728782 | orchestrator | Friday 19 September 2025 11:28:12 +0000 (0:00:00.296) 0:03:21.515 ******
2025-09-19 11:28:39.728790 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728798 | orchestrator |
2025-09-19 11:28:39.728806 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-09-19 11:28:39.728813 | orchestrator | Friday 19 September 2025 11:28:13 +0000 (0:00:00.270) 0:03:21.786 ******
2025-09-19 11:28:39.728821 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728829 | orchestrator |
2025-09-19 11:28:39.728837 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-09-19 11:28:39.728845 | orchestrator | Friday 19 September 2025 11:28:13 +0000 (0:00:00.184) 0:03:21.971 ******
2025-09-19 11:28:39.728872 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-09-19 11:28:39.728881 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-09-19 11:28:39.728889 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728897 | orchestrator |
2025-09-19 11:28:39.728905 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-09-19 11:28:39.728912 | orchestrator | Friday 19 September 2025 11:28:13 +0000 (0:00:00.516) 0:03:22.488 ******
2025-09-19 11:28:39.728920 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.728928 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:28:39.728942 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:28:39.728950 | orchestrator |
2025-09-19 11:28:39.728958 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-09-19 11:28:39.728966 | orchestrator | Friday 19 September 2025 11:28:14 +0000 (0:00:00.512) 0:03:23.000 ******
2025-09-19 11:28:39.728974 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:28:39.728981 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:28:39.728989 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:28:39.728997 | orchestrator |
2025-09-19 11:28:39.729008 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-09-19 11:28:39.729020 | orchestrator |
2025-09-19 11:28:39.729030 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-09-19 11:28:39.729041 | orchestrator | Friday
19 September 2025 11:28:15 +0000 (0:00:01.082) 0:03:24.082 ****** 2025-09-19 11:28:39.729052 | orchestrator | ok: [testbed-manager] 2025-09-19 11:28:39.729063 | orchestrator | 2025-09-19 11:28:39.729074 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-19 11:28:39.729086 | orchestrator | Friday 19 September 2025 11:28:15 +0000 (0:00:00.119) 0:03:24.202 ****** 2025-09-19 11:28:39.729097 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-09-19 11:28:39.729110 | orchestrator | 2025-09-19 11:28:39.729122 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-19 11:28:39.729133 | orchestrator | Friday 19 September 2025 11:28:16 +0000 (0:00:00.360) 0:03:24.563 ****** 2025-09-19 11:28:39.729145 | orchestrator | changed: [testbed-manager] 2025-09-19 11:28:39.729154 | orchestrator | 2025-09-19 11:28:39.729161 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-19 11:28:39.729167 | orchestrator | 2025-09-19 11:28:39.729174 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-19 11:28:39.729186 | orchestrator | Friday 19 September 2025 11:28:22 +0000 (0:00:06.237) 0:03:30.800 ****** 2025-09-19 11:28:39.729193 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:28:39.729200 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:28:39.729206 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:28:39.729213 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:28:39.729220 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:28:39.729226 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:28:39.729233 | orchestrator | 2025-09-19 11:28:39.729239 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-19 11:28:39.729246 | orchestrator | Friday 19 September 2025 11:28:23 +0000 
(0:00:00.732) 0:03:31.533 ****** 2025-09-19 11:28:39.729252 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-19 11:28:39.729259 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-19 11:28:39.729266 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-19 11:28:39.729272 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-19 11:28:39.729305 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-19 11:28:39.729312 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-19 11:28:39.729319 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-19 11:28:39.729325 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-19 11:28:39.729332 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-19 11:28:39.729338 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-19 11:28:39.729345 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-19 11:28:39.729356 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-19 11:28:39.729371 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-19 11:28:39.729378 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-19 11:28:39.729384 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-19 11:28:39.729391 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/network-plane=true) 2025-09-19 11:28:39.729397 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-19 11:28:39.729404 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-19 11:28:39.729411 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-19 11:28:39.729417 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-19 11:28:39.729424 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-19 11:28:39.729430 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-19 11:28:39.729449 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-19 11:28:39.729456 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-19 11:28:39.729463 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-19 11:28:39.729469 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-19 11:28:39.729476 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-19 11:28:39.729483 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-19 11:28:39.729489 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-19 11:28:39.729496 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-19 11:28:39.729502 | orchestrator | 2025-09-19 11:28:39.729509 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-19 11:28:39.729516 | orchestrator | Friday 19 
September 2025 11:28:37 +0000 (0:00:14.510) 0:03:46.043 ****** 2025-09-19 11:28:39.729523 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:28:39.729529 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:28:39.729536 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:39.729542 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.729549 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.729556 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.729562 | orchestrator | 2025-09-19 11:28:39.729569 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-19 11:28:39.729576 | orchestrator | Friday 19 September 2025 11:28:38 +0000 (0:00:00.581) 0:03:46.624 ****** 2025-09-19 11:28:39.729582 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:28:39.729589 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:28:39.729595 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:39.729602 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.729608 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.729615 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.729621 | orchestrator | 2025-09-19 11:28:39.729628 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:28:39.729639 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:28:39.729649 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-19 11:28:39.729656 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-19 11:28:39.729668 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-19 11:28:39.729675 | orchestrator | testbed-node-3 : ok=19  changed=9  
unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-19 11:28:39.729681 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-19 11:28:39.729688 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-19 11:28:39.729695 | orchestrator | 2025-09-19 11:28:39.729701 | orchestrator | 2025-09-19 11:28:39.729708 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:28:39.729715 | orchestrator | Friday 19 September 2025 11:28:38 +0000 (0:00:00.611) 0:03:47.235 ****** 2025-09-19 11:28:39.729722 | orchestrator | =============================================================================== 2025-09-19 11:28:39.729728 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.83s 2025-09-19 11:28:39.729736 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.59s 2025-09-19 11:28:39.729746 | orchestrator | kubectl : Install required packages ------------------------------------ 15.00s 2025-09-19 11:28:39.729753 | orchestrator | Manage labels ---------------------------------------------------------- 14.51s 2025-09-19 11:28:39.729759 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.54s 2025-09-19 11:28:39.729766 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.82s 2025-09-19 11:28:39.729772 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.24s 2025-09-19 11:28:39.729779 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.06s 2025-09-19 11:28:39.729786 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.35s 2025-09-19 11:28:39.729792 | 
orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.86s 2025-09-19 11:28:39.729799 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.75s 2025-09-19 11:28:39.729805 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.33s 2025-09-19 11:28:39.729812 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.12s 2025-09-19 11:28:39.729819 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.03s 2025-09-19 11:28:39.729825 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.96s 2025-09-19 11:28:39.729832 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.78s 2025-09-19 11:28:39.729838 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.72s 2025-09-19 11:28:39.729845 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.67s 2025-09-19 11:28:39.729851 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.55s 2025-09-19 11:28:39.729858 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.55s 2025-09-19 11:28:39.729865 | orchestrator | 2025-09-19 11:28:39 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:28:39.730206 | orchestrator | 2025-09-19 11:28:39.730222 | orchestrator | 2025-09-19 11:28:39.730229 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:28:39.730236 | orchestrator | 2025-09-19 11:28:39.730243 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:28:39.730249 | orchestrator | Friday 19 September 2025 11:27:24 +0000 (0:00:00.195) 0:00:00.195 ****** 2025-09-19 
11:28:39.730256 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:28:39.730273 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:28:39.730279 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:28:39.730286 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:28:39.730293 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:28:39.730299 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:28:39.730306 | orchestrator | 2025-09-19 11:28:39.730313 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:28:39.730319 | orchestrator | Friday 19 September 2025 11:27:25 +0000 (0:00:00.714) 0:00:00.910 ****** 2025-09-19 11:28:39.730326 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 11:28:39.730333 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 11:28:39.730339 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 11:28:39.730346 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 11:28:39.730352 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 11:28:39.730359 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-09-19 11:28:39.730366 | orchestrator | 2025-09-19 11:28:39.730372 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-09-19 11:28:39.730379 | orchestrator | 2025-09-19 11:28:39.730385 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-09-19 11:28:39.730392 | orchestrator | Friday 19 September 2025 11:27:26 +0000 (0:00:00.979) 0:00:01.889 ****** 2025-09-19 11:28:39.730399 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:28:39.730407 | orchestrator | 2025-09-19 11:28:39.730413 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-19 11:28:39.730420 | orchestrator | Friday 19 September 2025 11:27:27 +0000 (0:00:01.421) 0:00:03.310 ****** 2025-09-19 11:28:39.730427 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-19 11:28:39.730447 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-19 11:28:39.730455 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-19 11:28:39.730462 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-19 11:28:39.730468 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-19 11:28:39.730475 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-19 11:28:39.730482 | orchestrator | 2025-09-19 11:28:39.730488 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-19 11:28:39.730495 | orchestrator | Friday 19 September 2025 11:27:29 +0000 (0:00:01.554) 0:00:04.864 ****** 2025-09-19 11:28:39.730501 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-09-19 11:28:39.730508 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-09-19 11:28:39.730514 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-09-19 11:28:39.730521 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-09-19 11:28:39.730532 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-09-19 11:28:39.730539 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-09-19 11:28:39.730545 | orchestrator | 2025-09-19 11:28:39.730552 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-19 11:28:39.730558 | orchestrator | Friday 19 September 2025 11:27:31 +0000 (0:00:02.172) 
0:00:07.037 ****** 2025-09-19 11:28:39.730565 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-19 11:28:39.730625 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-19 11:28:39.730637 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:28:39.730643 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-19 11:28:39.730650 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:28:39.730662 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-09-19 11:28:39.730669 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:39.730675 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-09-19 11:28:39.730682 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.730688 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.730694 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-09-19 11:28:39.730701 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.730707 | orchestrator | 2025-09-19 11:28:39.730714 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-19 11:28:39.730721 | orchestrator | Friday 19 September 2025 11:27:34 +0000 (0:00:02.709) 0:00:09.746 ****** 2025-09-19 11:28:39.730727 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:28:39.730734 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:28:39.730740 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:28:39.730746 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:28:39.730753 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:28:39.730759 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:28:39.730766 | orchestrator | 2025-09-19 11:28:39.730773 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-19 11:28:39.730779 | orchestrator | Friday 19 September 2025 11:27:35 +0000 (0:00:01.423) 0:00:11.169 
****** 2025-09-19 11:28:39.730796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730815 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730848 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730910 | orchestrator | 2025-09-19 11:28:39.730917 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-19 11:28:39.730924 | orchestrator | Friday 19 September 2025 11:27:37 +0000 (0:00:02.040) 0:00:13.210 ****** 2025-09-19 11:28:39.730932 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730939 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 11:28:39.730987 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 11:28:39.730994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 11:28:39.731009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 11:28:39.731016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 11:28:39.731026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 11:28:39.731034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch',
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 11:28:39.731040 | orchestrator |
2025-09-19 11:28:39.731047 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-09-19 11:28:39.731054 | orchestrator | Friday 19 September 2025 11:27:40 +0000 (0:00:03.235) 0:00:16.445 ******
2025-09-19 11:28:39.731061 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:28:39.731068 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:28:39.731074 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:39.731081 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:28:39.731087 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:28:39.731094 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:28:39.731100 | orchestrator |
2025-09-19 11:28:39.731107 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-09-19 11:28:39.731114 | orchestrator | Friday 19 September 2025 11:27:42 +0000 (0:00:02.005) 0:00:18.451 ******
2025-09-19 11:28:39.731126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared',
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-19 11:28:39.731136 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-19 11:28:39.731144 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-19 11:28:39.731154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711',
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-19 11:28:39.731162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-19 11:28:39.731169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-09-19 11:28:39.731180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 11:28:39.731190 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 11:28:39.731197 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 11:28:39.731210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 11:28:39.731217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 11:28:39.731231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image':
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-19 11:28:39.731238 | orchestrator |
2025-09-19 11:28:39.731245 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 11:28:39.731251 | orchestrator | Friday 19 September 2025 11:27:45 +0000 (0:00:02.401) 0:00:20.852 ******
2025-09-19 11:28:39.731258 | orchestrator |
2025-09-19 11:28:39.731265 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 11:28:39.731272 | orchestrator | Friday 19 September 2025 11:27:45 +0000 (0:00:00.259) 0:00:21.111 ******
2025-09-19 11:28:39.731280 | orchestrator |
2025-09-19 11:28:39.731287 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 11:28:39.731294 | orchestrator | Friday 19 September 2025 11:27:45 +0000 (0:00:00.148) 0:00:21.260 ******
2025-09-19 11:28:39.731302 | orchestrator |
2025-09-19 11:28:39.731309 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 11:28:39.731316 | orchestrator | Friday 19 September 2025 11:27:45 +0000 (0:00:00.130) 0:00:21.390 ******
2025-09-19 11:28:39.731323 | orchestrator |
2025-09-19 11:28:39.731334 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 11:28:39.731342 | orchestrator | Friday 19 September 2025 11:27:45 +0000 (0:00:00.130) 0:00:21.521 ******
2025-09-19 11:28:39.731349 | orchestrator |
2025-09-19 11:28:39.731357 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-19 11:28:39.731364 | orchestrator | Friday 19 September 2025 11:27:46 +0000 (0:00:00.254) 0:00:21.775 ******
2025-09-19 11:28:39.731372 | orchestrator |
2025-09-19 11:28:39.731379 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-09-19 11:28:39.731386 | orchestrator | Friday 19 September 2025 11:27:46 +0000 (0:00:00.139) 0:00:21.914 ******
2025-09-19 11:28:39.731394 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:28:39.731401 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:28:39.731409 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:28:39.731416 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:28:39.731423 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:28:39.731431 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:28:39.731482 | orchestrator |
2025-09-19 11:28:39.731490 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-09-19 11:28:39.731498 | orchestrator | Friday 19 September 2025 11:27:58 +0000 (0:00:12.036) 0:00:33.951 ******
2025-09-19 11:28:39.731506 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:28:39.731513 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:28:39.731521 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:28:39.731528 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:28:39.731536 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:28:39.731543 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:28:39.731550 | orchestrator |
2025-09-19 11:28:39.731557 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-19 11:28:39.731565 | orchestrator | Friday 19 September 2025 11:28:01 +0000 (0:00:02.805) 0:00:36.757 ******
2025-09-19 11:28:39.731572 | orchestrator |
changed: [testbed-node-0]
2025-09-19 11:28:39.731579 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:28:39.731586 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:28:39.731593 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:28:39.731601 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:28:39.731614 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:28:39.731621 | orchestrator |
2025-09-19 11:28:39.731629 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-09-19 11:28:39.731637 | orchestrator | Friday 19 September 2025 11:28:10 +0000 (0:00:09.835) 0:00:46.592 ******
2025-09-19 11:28:39.731648 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-09-19 11:28:39.731655 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-09-19 11:28:39.731662 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-09-19 11:28:39.731669 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-09-19 11:28:39.731675 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-09-19 11:28:39.731682 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-09-19 11:28:39.731688 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-09-19 11:28:39.731695 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-09-19 11:28:39.731701 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname',
'value': 'testbed-node-5'})
2025-09-19 11:28:39.731708 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-09-19 11:28:39.731715 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-09-19 11:28:39.731721 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-09-19 11:28:39.731728 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 11:28:39.731734 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 11:28:39.731741 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 11:28:39.731747 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 11:28:39.731753 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 11:28:39.731759 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-19 11:28:39.731765 | orchestrator |
2025-09-19 11:28:39.731772 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-09-19 11:28:39.731778 | orchestrator | Friday 19 September 2025 11:28:18 +0000 (0:00:07.946) 0:00:54.539 ******
2025-09-19 11:28:39.731784 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-09-19 11:28:39.731790 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:28:39.731796 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-09-19 11:28:39.731802 | orchestrator | skipping:
[testbed-node-4]
2025-09-19 11:28:39.731808 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-09-19 11:28:39.731815 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:39.731821 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-09-19 11:28:39.731827 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-09-19 11:28:39.731833 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-09-19 11:28:39.731839 | orchestrator |
2025-09-19 11:28:39.731846 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-09-19 11:28:39.731856 | orchestrator | Friday 19 September 2025 11:28:23 +0000 (0:00:04.072) 0:00:58.611 ******
2025-09-19 11:28:39.731862 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-09-19 11:28:39.731868 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:28:39.731875 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-09-19 11:28:39.731881 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:28:39.731887 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-09-19 11:28:39.731893 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:28:39.731899 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-09-19 11:28:39.731905 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-09-19 11:28:39.731911 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-09-19 11:28:39.731917 | orchestrator |
2025-09-19 11:28:39.731923 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-19 11:28:39.731929 | orchestrator | Friday 19 September 2025 11:28:26 +0000 (0:00:03.952) 0:01:02.563 ******
2025-09-19 11:28:39.731935 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:28:39.731941 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:28:39.731947 |
orchestrator | changed: [testbed-node-5]
2025-09-19 11:28:39.731953 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:28:39.731960 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:28:39.731966 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:28:39.731972 | orchestrator |
2025-09-19 11:28:39.731978 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:28:39.731984 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 11:28:39.731994 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 11:28:39.732001 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 11:28:39.732007 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 11:28:39.732014 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 11:28:39.732020 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 11:28:39.732026 | orchestrator |
2025-09-19 11:28:39.732032 | orchestrator |
2025-09-19 11:28:39.732038 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:28:39.732044 | orchestrator | Friday 19 September 2025 11:28:36 +0000 (0:00:09.982) 0:01:12.546 ******
2025-09-19 11:28:39.732051 | orchestrator | ===============================================================================
2025-09-19 11:28:39.732057 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.82s
2025-09-19 11:28:39.732063 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.04s
2025-09-19 11:28:39.732069 | orchestrator | openvswitch : Set system-id,
hostname and hw-offload -------------------- 7.95s
2025-09-19 11:28:39.732075 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 4.07s
2025-09-19 11:28:39.732081 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.95s
2025-09-19 11:28:39.732087 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.24s
2025-09-19 11:28:39.732093 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.81s
2025-09-19 11:28:39.732099 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.71s
2025-09-19 11:28:39.732110 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.40s
2025-09-19 11:28:39.732116 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.17s
2025-09-19 11:28:39.732122 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.04s
2025-09-19 11:28:39.732128 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.01s
2025-09-19 11:28:39.732134 | orchestrator | module-load : Load modules ---------------------------------------------- 1.55s
2025-09-19 11:28:39.732140 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.42s
2025-09-19 11:28:39.732146 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.42s
2025-09-19 11:28:39.732152 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.06s
2025-09-19 11:28:39.732158 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s
2025-09-19 11:28:39.732165 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s
2025-09-19 11:28:39.732646 | orchestrator | 2025-09-19 11:28:39 | INFO  | Task
5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:39.732658 | orchestrator | 2025-09-19 11:28:39 | INFO  | Task 390a7b81-7431-4dc6-ab2d-ccd595cff53a is in state SUCCESS
2025-09-19 11:28:39.732664 | orchestrator | 2025-09-19 11:28:39 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:39.732671 | orchestrator | 2025-09-19 11:28:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:28:42.779814 | orchestrator | 2025-09-19 11:28:42 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:42.782989 | orchestrator | 2025-09-19 11:28:42 | INFO  | Task 6fcda122-37b4-4511-bf91-8a92acb01124 is in state STARTED
2025-09-19 11:28:42.785167 | orchestrator | 2025-09-19 11:28:42 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED
2025-09-19 11:28:42.786085 | orchestrator | 2025-09-19 11:28:42 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:42.787168 | orchestrator | 2025-09-19 11:28:42 | INFO  | Task 4158efff-ae24-429c-9003-b75a51b50549 is in state STARTED
2025-09-19 11:28:42.794425 | orchestrator | 2025-09-19 11:28:42 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:42.794517 | orchestrator | 2025-09-19 11:28:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:28:45.826486 | orchestrator | 2025-09-19 11:28:45 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:45.826870 | orchestrator | 2025-09-19 11:28:45 | INFO  | Task 6fcda122-37b4-4511-bf91-8a92acb01124 is in state STARTED
2025-09-19 11:28:45.827645 | orchestrator | 2025-09-19 11:28:45 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED
2025-09-19 11:28:45.829109 | orchestrator | 2025-09-19 11:28:45 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:45.829748 | orchestrator | 2025-09-19 11:28:45 | INFO  | Task
4158efff-ae24-429c-9003-b75a51b50549 is in state STARTED
2025-09-19 11:28:45.832687 | orchestrator | 2025-09-19 11:28:45 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:45.832744 | orchestrator | 2025-09-19 11:28:45 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:28:49.001832 | orchestrator | 2025-09-19 11:28:48 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:49.001917 | orchestrator | 2025-09-19 11:28:48 | INFO  | Task 6fcda122-37b4-4511-bf91-8a92acb01124 is in state STARTED
2025-09-19 11:28:49.001932 | orchestrator | 2025-09-19 11:28:48 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED
2025-09-19 11:28:49.001970 | orchestrator | 2025-09-19 11:28:48 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:49.001982 | orchestrator | 2025-09-19 11:28:48 | INFO  | Task 4158efff-ae24-429c-9003-b75a51b50549 is in state SUCCESS
2025-09-19 11:28:49.001992 | orchestrator | 2025-09-19 11:28:48 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:49.002003 | orchestrator | 2025-09-19 11:28:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:28:51.920643 | orchestrator | 2025-09-19 11:28:51 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:51.920972 | orchestrator | 2025-09-19 11:28:51 | INFO  | Task 6fcda122-37b4-4511-bf91-8a92acb01124 is in state SUCCESS
2025-09-19 11:28:51.921398 | orchestrator | 2025-09-19 11:28:51 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED
2025-09-19 11:28:51.922495 | orchestrator | 2025-09-19 11:28:51 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:51.923207 | orchestrator | 2025-09-19 11:28:51 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:51.923222 | orchestrator | 2025-09-19 11:28:51 | INFO  | Wait 1
second(s) until the next check
2025-09-19 11:28:54.968537 | orchestrator | 2025-09-19 11:28:54 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:54.969337 | orchestrator | 2025-09-19 11:28:54 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED
2025-09-19 11:28:54.970479 | orchestrator | 2025-09-19 11:28:54 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:54.971843 | orchestrator | 2025-09-19 11:28:54 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:54.971868 | orchestrator | 2025-09-19 11:28:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:28:58.035950 | orchestrator | 2025-09-19 11:28:58 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:28:58.036168 | orchestrator | 2025-09-19 11:28:58 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED
2025-09-19 11:28:58.037096 | orchestrator | 2025-09-19 11:28:58 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:28:58.038197 | orchestrator | 2025-09-19 11:28:58 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:28:58.038223 | orchestrator | 2025-09-19 11:28:58 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:29:01.358769 | orchestrator | 2025-09-19 11:29:01 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED
2025-09-19 11:29:01.361034 | orchestrator | 2025-09-19 11:29:01 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED
2025-09-19 11:29:01.362466 | orchestrator | 2025-09-19 11:29:01 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:29:01.364135 | orchestrator | 2025-09-19 11:29:01 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:29:01.364200 | orchestrator | 2025-09-19 11:29:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:29:04.416337 | orchestrator | 2025-09-19 11:29:04 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:04.420638 | orchestrator | 2025-09-19 11:29:04 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:04.422226 | orchestrator | 2025-09-19 11:29:04 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:04.424638 | orchestrator | 2025-09-19 11:29:04 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:04.424948 | orchestrator | 2025-09-19 11:29:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:07.472027 | orchestrator | 2025-09-19 11:29:07 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:07.473586 | orchestrator | 2025-09-19 11:29:07 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:07.476216 | orchestrator | 2025-09-19 11:29:07 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:07.478623 | orchestrator | 2025-09-19 11:29:07 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:07.479100 | orchestrator | 2025-09-19 11:29:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:10.522006 | orchestrator | 2025-09-19 11:29:10 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:10.523509 | orchestrator | 2025-09-19 11:29:10 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:10.524878 | orchestrator | 2025-09-19 11:29:10 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:10.526357 | orchestrator | 2025-09-19 11:29:10 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:10.526453 | orchestrator | 2025-09-19 11:29:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:13.570551 | 
orchestrator | 2025-09-19 11:29:13 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:13.570995 | orchestrator | 2025-09-19 11:29:13 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:13.572011 | orchestrator | 2025-09-19 11:29:13 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:13.573582 | orchestrator | 2025-09-19 11:29:13 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:13.573604 | orchestrator | 2025-09-19 11:29:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:16.616129 | orchestrator | 2025-09-19 11:29:16 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:16.618451 | orchestrator | 2025-09-19 11:29:16 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:16.618480 | orchestrator | 2025-09-19 11:29:16 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:16.618485 | orchestrator | 2025-09-19 11:29:16 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:16.618490 | orchestrator | 2025-09-19 11:29:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:19.665196 | orchestrator | 2025-09-19 11:29:19 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:19.667651 | orchestrator | 2025-09-19 11:29:19 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:19.670512 | orchestrator | 2025-09-19 11:29:19 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:19.671931 | orchestrator | 2025-09-19 11:29:19 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:19.671963 | orchestrator | 2025-09-19 11:29:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:22.731678 | orchestrator | 2025-09-19 
11:29:22 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:22.731774 | orchestrator | 2025-09-19 11:29:22 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:22.734734 | orchestrator | 2025-09-19 11:29:22 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:22.737658 | orchestrator | 2025-09-19 11:29:22 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:22.740028 | orchestrator | 2025-09-19 11:29:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:25.783782 | orchestrator | 2025-09-19 11:29:25 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:25.786541 | orchestrator | 2025-09-19 11:29:25 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:25.789308 | orchestrator | 2025-09-19 11:29:25 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:25.791816 | orchestrator | 2025-09-19 11:29:25 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:25.791978 | orchestrator | 2025-09-19 11:29:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:28.837986 | orchestrator | 2025-09-19 11:29:28 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:28.838464 | orchestrator | 2025-09-19 11:29:28 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:28.839420 | orchestrator | 2025-09-19 11:29:28 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:28.840499 | orchestrator | 2025-09-19 11:29:28 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:28.840522 | orchestrator | 2025-09-19 11:29:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:31.894693 | orchestrator | 2025-09-19 11:29:31 | INFO  | Task 
d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:31.894793 | orchestrator | 2025-09-19 11:29:31 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:31.895210 | orchestrator | 2025-09-19 11:29:31 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:31.897266 | orchestrator | 2025-09-19 11:29:31 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:31.897335 | orchestrator | 2025-09-19 11:29:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:34.953602 | orchestrator | 2025-09-19 11:29:34 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:34.953697 | orchestrator | 2025-09-19 11:29:34 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:34.954794 | orchestrator | 2025-09-19 11:29:34 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:34.956093 | orchestrator | 2025-09-19 11:29:34 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:34.956117 | orchestrator | 2025-09-19 11:29:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:37.998760 | orchestrator | 2025-09-19 11:29:37 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:37.999667 | orchestrator | 2025-09-19 11:29:38 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:38.000111 | orchestrator | 2025-09-19 11:29:38 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:38.000765 | orchestrator | 2025-09-19 11:29:38 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:38.000796 | orchestrator | 2025-09-19 11:29:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:41.040175 | orchestrator | 2025-09-19 11:29:41 | INFO  | Task 
d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:41.040493 | orchestrator | 2025-09-19 11:29:41 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:41.041127 | orchestrator | 2025-09-19 11:29:41 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:41.041938 | orchestrator | 2025-09-19 11:29:41 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:41.041970 | orchestrator | 2025-09-19 11:29:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:44.201025 | orchestrator | 2025-09-19 11:29:44 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:44.202845 | orchestrator | 2025-09-19 11:29:44 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:44.204455 | orchestrator | 2025-09-19 11:29:44 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:44.205866 | orchestrator | 2025-09-19 11:29:44 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:44.205902 | orchestrator | 2025-09-19 11:29:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:47.245811 | orchestrator | 2025-09-19 11:29:47 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:47.245902 | orchestrator | 2025-09-19 11:29:47 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:47.246518 | orchestrator | 2025-09-19 11:29:47 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:47.247937 | orchestrator | 2025-09-19 11:29:47 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:47.247955 | orchestrator | 2025-09-19 11:29:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:50.378295 | orchestrator | 2025-09-19 11:29:50 | INFO  | Task 
d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:50.378429 | orchestrator | 2025-09-19 11:29:50 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:50.378443 | orchestrator | 2025-09-19 11:29:50 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:50.378455 | orchestrator | 2025-09-19 11:29:50 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:50.378465 | orchestrator | 2025-09-19 11:29:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:53.306862 | orchestrator | 2025-09-19 11:29:53 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:53.307055 | orchestrator | 2025-09-19 11:29:53 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:53.308154 | orchestrator | 2025-09-19 11:29:53 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:53.312149 | orchestrator | 2025-09-19 11:29:53 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:53.312204 | orchestrator | 2025-09-19 11:29:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:56.347205 | orchestrator | 2025-09-19 11:29:56 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:56.347415 | orchestrator | 2025-09-19 11:29:56 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:56.348137 | orchestrator | 2025-09-19 11:29:56 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:56.348635 | orchestrator | 2025-09-19 11:29:56 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:56.348663 | orchestrator | 2025-09-19 11:29:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:29:59.377348 | orchestrator | 2025-09-19 11:29:59 | INFO  | Task 
d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:29:59.379536 | orchestrator | 2025-09-19 11:29:59 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:29:59.381891 | orchestrator | 2025-09-19 11:29:59 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:29:59.384130 | orchestrator | 2025-09-19 11:29:59 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:29:59.384164 | orchestrator | 2025-09-19 11:29:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:02.428849 | orchestrator | 2025-09-19 11:30:02 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:30:02.431395 | orchestrator | 2025-09-19 11:30:02 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:30:02.433422 | orchestrator | 2025-09-19 11:30:02 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:30:02.435349 | orchestrator | 2025-09-19 11:30:02 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:30:02.435607 | orchestrator | 2025-09-19 11:30:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:05.463255 | orchestrator | 2025-09-19 11:30:05 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:30:05.463451 | orchestrator | 2025-09-19 11:30:05 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:30:05.464322 | orchestrator | 2025-09-19 11:30:05 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:30:05.464882 | orchestrator | 2025-09-19 11:30:05 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:30:05.465028 | orchestrator | 2025-09-19 11:30:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:08.509508 | orchestrator | 2025-09-19 11:30:08 | INFO  | Task 
d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:30:08.510893 | orchestrator | 2025-09-19 11:30:08 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:30:08.511481 | orchestrator | 2025-09-19 11:30:08 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:30:08.512245 | orchestrator | 2025-09-19 11:30:08 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:30:08.512313 | orchestrator | 2025-09-19 11:30:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:11.543337 | orchestrator | 2025-09-19 11:30:11 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state STARTED 2025-09-19 11:30:11.544983 | orchestrator | 2025-09-19 11:30:11 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:30:11.547753 | orchestrator | 2025-09-19 11:30:11 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:30:11.549709 | orchestrator | 2025-09-19 11:30:11 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:30:11.549875 | orchestrator | 2025-09-19 11:30:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:30:14.586855 | orchestrator | 2025-09-19 11:30:14 | INFO  | Task d635b041-af12-495b-8992-32c25f8a9cc0 is in state SUCCESS 2025-09-19 11:30:14.588376 | orchestrator | 2025-09-19 11:30:14.588420 | orchestrator | 2025-09-19 11:30:14.588454 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-19 11:30:14.588465 | orchestrator | 2025-09-19 11:30:14.588475 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-19 11:30:14.588485 | orchestrator | Friday 19 September 2025 11:28:44 +0000 (0:00:00.186) 0:00:00.186 ****** 2025-09-19 11:30:14.588495 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-19 11:30:14.588504 | 
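The `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above come from a client that polls remote task state until completion. A minimal sketch of such a poll loop, assuming a caller-supplied `get_task_state` lookup (this is not the actual osism client API):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=3600.0):
    """Poll until every task leaves the PENDING/STARTED states.

    get_task_state(task_id) -> str is a caller-supplied lookup (assumed
    interface); a real client would query its task backend instead.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                results[task_id] = state
        pending -= results.keys()
        if pending:
            print(f"INFO  | Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

With a fixed one-second interval the loop produces exactly the repeating block seen in the log until the backend reports a terminal state such as SUCCESS.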
2025-09-19 11:30:14.588514 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-19 11:30:14.588523 | orchestrator | Friday 19 September 2025 11:28:44 +0000 (0:00:00.746) 0:00:00.932 ******
2025-09-19 11:30:14.588533 | orchestrator | changed: [testbed-manager]
2025-09-19 11:30:14.588553 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-09-19 11:30:14.588562 | orchestrator | Friday 19 September 2025 11:28:46 +0000 (0:00:01.338) 0:00:02.270 ******
2025-09-19 11:30:14.588571 | orchestrator | changed: [testbed-manager]
2025-09-19 11:30:14.588590 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:30:14.588600 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:30:14.588629 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:30:14.588639 | orchestrator | Friday 19 September 2025 11:28:47 +0000 (0:00:01.224) 0:00:03.494 ******
2025-09-19 11:30:14.588648 | orchestrator | ===============================================================================
2025-09-19 11:30:14.588658 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.34s
2025-09-19 11:30:14.588667 | orchestrator | Change server address in the kubeconfig file ---------------------------- 1.22s
2025-09-19 11:30:14.588676 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.75s
2025-09-19 11:30:14.588705 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-19 11:30:14.588724 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-19 11:30:14.588733 | orchestrator | Friday 19 September 2025 11:28:43 +0000 (0:00:00.180) 0:00:00.180 ******
2025-09-19 11:30:14.588742 | orchestrator | ok: [testbed-manager]
2025-09-19 11:30:14.588762 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-19 11:30:14.588772 | orchestrator | Friday 19 September 2025 11:28:44 +0000 (0:00:00.608) 0:00:00.789 ******
2025-09-19 11:30:14.588781 | orchestrator | ok: [testbed-manager]
2025-09-19 11:30:14.588801 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-19 11:30:14.588810 | orchestrator | Friday 19 September 2025 11:28:45 +0000 (0:00:00.496) 0:00:01.285 ******
2025-09-19 11:30:14.588819 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-19 11:30:14.588838 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-19 11:30:14.588848 | orchestrator | Friday 19 September 2025 11:28:45 +0000 (0:00:00.812) 0:00:02.098 ******
2025-09-19 11:30:14.588857 | orchestrator | changed: [testbed-manager]
2025-09-19 11:30:14.588876 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-19 11:30:14.588885 | orchestrator | Friday 19 September 2025 11:28:47 +0000 (0:00:01.742) 0:00:03.840 ******
2025-09-19 11:30:14.588895 | orchestrator | changed: [testbed-manager]
2025-09-19 11:30:14.588914 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-19 11:30:14.588923 | orchestrator | Friday 19 September 2025 11:28:48 +0000 (0:00:01.082) 0:00:04.923 ******
2025-09-19 11:30:14.588939 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 11:30:14.588958 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-19 11:30:14.588968 | orchestrator | Friday 19 September 2025 11:28:50 +0000 (0:00:01.447) 0:00:06.370 ******
2025-09-19 11:30:14.588977 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 11:30:14.588996 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-19 11:30:14.589006 | orchestrator | Friday 19 September 2025 11:28:50 +0000 (0:00:00.724) 0:00:07.095 ******
2025-09-19 11:30:14.589015 | orchestrator | ok: [testbed-manager]
2025-09-19 11:30:14.589034 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-19 11:30:14.589043 | orchestrator | Friday 19 September 2025 11:28:51 +0000 (0:00:00.442) 0:00:07.538 ******
2025-09-19 11:30:14.589053 | orchestrator | ok: [testbed-manager]
2025-09-19 11:30:14.589071 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:30:14.589094 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:30:14.589138 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:30:14.589155 | orchestrator | Friday 19 September 2025 11:28:51 +0000 (0:00:00.282) 0:00:07.820 ******
2025-09-19 11:30:14.589235 | orchestrator | ===============================================================================
2025-09-19 11:30:14.589247 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.74s
2025-09-19 11:30:14.589277 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.45s
2025-09-19 11:30:14.589301 | orchestrator | Change server address in the kubeconfig --------------------------------- 1.08s
2025-09-19 11:30:14.589311 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.81s
2025-09-19 11:30:14.589320 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.72s
2025-09-19 11:30:14.589330 | orchestrator | Get home directory of operator user ------------------------------------- 0.61s
2025-09-19 11:30:14.589339 | orchestrator | Create .kube directory -------------------------------------------------- 0.50s
2025-09-19 11:30:14.589348 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.44s
2025-09-19 11:30:14.589357 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.28s
2025-09-19 11:30:14.589381 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-09-19 11:30:14.589413 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-19 11:30:14.589428 | orchestrator | Friday 19 September 2025 11:27:46 +0000 (0:00:00.130) 0:00:00.130 ******
2025-09-19 11:30:14.589437 | orchestrator | ok: [localhost] => {
2025-09-19 11:30:14.589447 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-09-19 11:30:14.589457 | orchestrator | }
2025-09-19 11:30:14.589476 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-09-19 11:30:14.589485 | orchestrator | Friday 19 September 2025 11:27:46 +0000 (0:00:00.041) 0:00:00.172 ******
2025-09-19 11:30:14.589495 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-09-19 11:30:14.589505 | orchestrator | ...ignoring
2025-09-19 11:30:14.589524 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-09-19 11:30:14.589542 | orchestrator | Friday 19 September 2025 11:27:50 +0000 (0:00:04.481) 0:00:04.654 ******
2025-09-19 11:30:14.589552 | orchestrator | skipping: [localhost]
2025-09-19 11:30:14.589570 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-09-19 11:30:14.589580 | orchestrator | Friday 19 September 2025 11:27:50 +0000 (0:00:00.154) 0:00:04.809 ******
2025-09-19 11:30:14.589589 | orchestrator | ok: [localhost]
2025-09-19 11:30:14.589608 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:30:14.589627 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:30:14.589636 | orchestrator | Friday 19 September 2025 11:27:51 +0000 (0:00:00.216) 0:00:05.026 ******
2025-09-19 11:30:14.589645 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:30:14.589655 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:30:14.589664 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:30:14.589683 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:30:14.589692 | orchestrator | Friday 19 September 2025 11:27:51 +0000 (0:00:00.318) 0:00:05.344 ******
2025-09-19 11:30:14.589702 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-09-19 11:30:14.589711 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-09-19 11:30:14.589721 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-09-19 11:30:14.589740 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-09-19 11:30:14.589759 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-19 11:30:14.589769 | orchestrator | Friday 19 September 2025 11:27:51 +0000 (0:00:00.499) 0:00:05.844 ******
2025-09-19 11:30:14.589778 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:30:14.589797 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-19 11:30:14.589806 | orchestrator | Friday 19 September 2025 11:27:52 +0000 (0:00:00.585) 0:00:06.429 ******
2025-09-19 11:30:14.589816 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:30:14.589834 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-09-19 11:30:14.589844 | orchestrator | Friday 19 September 2025 11:27:53 +0000 (0:00:01.009) 0:00:07.439 ******
2025-09-19 11:30:14.589853 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:30:14.589872 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-09-19 11:30:14.589881 | orchestrator | Friday 19 September 2025 11:27:53 +0000 (0:00:00.334) 0:00:07.773 ******
2025-09-19 11:30:14.589890 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:30:14.589909 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-09-19 11:30:14.589918 | orchestrator | Friday 19 September 2025 11:27:54 +0000 (0:00:00.498) 0:00:08.271 ******
2025-09-19 11:30:14.589928 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:30:14.589947 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-09-19 11:30:14.589962 | orchestrator | Friday 19 September 2025 11:27:54 +0000 (0:00:00.389) 0:00:08.660 ******
2025-09-19 11:30:14.589971 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:30:14.589990 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-19 11:30:14.589999 | orchestrator | Friday 19 September 2025 11:27:55 +0000 (0:00:00.817) 0:00:09.478 ******
2025-09-19 11:30:14.590009 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:30:14.590088 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-19 11:30:14.590105 | orchestrator | Friday 19 September 2025 11:27:56 +0000 (0:00:01.112) 0:00:10.590 ******
2025-09-19 11:30:14.590115 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:30:14.590164 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-09-19 11:30:14.590183 | orchestrator | Friday 19 September 2025 11:27:57 +0000 (0:00:00.917) 0:00:11.508 ******
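The two guard tasks above ('Check if running RabbitMQ is at most one version behind' and 'Catch when RabbitMQ is being downgraded') are skipped here because no RabbitMQ is running yet. Their intent can be sketched as a single check; this is a simplified re-implementation under that reading, not the actual kolla-ansible logic (version strings look like `3.13.7.20250711`):

```python
def check_rabbitmq_upgrade(current, new):
    """Refuse downgrades; report whether the running version is at most
    one minor release behind the new image (simplified assumption).
    """
    cur = tuple(int(p) for p in current.split(".")[:2])
    new_v = tuple(int(p) for p in new.split(".")[:2])
    if new_v < cur:
        raise ValueError(f"RabbitMQ is being downgraded: {current} -> {new}")
    # RabbitMQ only supports rolling upgrades across adjacent minor versions
    return cur[0] == new_v[0] and new_v[1] - cur[1] <= 1
```

Skipping more than one minor version would make the check fail, which is why the role compares the running container's version against the new image before deploying.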
2025-09-19 11:30:14.590200 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:30:14.590231 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-09-19 11:30:14.590240 | orchestrator | Friday 19 September 2025 11:27:58 +0000 (0:00:00.474) 0:00:11.982 ******
2025-09-19 11:30:14.590249 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:30:14.590290 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-09-19 11:30:14.590299 | orchestrator | Friday 19 September 2025 11:27:59 +0000 (0:00:01.170) 0:00:13.159 ******
2025-09-19 11:30:14.590313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 11:30:14.590328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', ...})
2025-09-19 11:30:14.590345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', ...})
2025-09-19 11:30:14.590373 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-09-19 11:30:14.590382 | orchestrator | Friday 19 September 2025 11:28:01 +0000 (0:00:01.862) 0:00:15.021 ******
2025-09-19 11:30:14.590401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', ...})
2025-09-19 11:30:14.590413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', ...})
2025-09-19 11:30:14.590424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', ...})
2025-09-19 11:30:14.590444 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-09-19 11:30:14.590453 | orchestrator | Friday 19 September 2025 11:28:05 +0000 (0:00:04.358) 0:00:19.380 ******
2025-09-19 11:30:14.590468 | orchestrator |
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-19 11:30:14.590478 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-19 11:30:14.590491 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-19 11:30:14.590500 | orchestrator | 2025-09-19 11:30:14.590510 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-09-19 11:30:14.590519 | orchestrator | Friday 19 September 2025 11:28:07 +0000 (0:00:01.738) 0:00:21.119 ****** 2025-09-19 11:30:14.590528 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-19 11:30:14.590538 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-19 11:30:14.590547 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-19 11:30:14.590556 | orchestrator | 2025-09-19 11:30:14.590570 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-19 11:30:14.590580 | orchestrator | Friday 19 September 2025 11:28:09 +0000 (0:00:02.486) 0:00:23.605 ****** 2025-09-19 11:30:14.590589 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-19 11:30:14.590599 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-19 11:30:14.590608 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-19 11:30:14.590617 | orchestrator | 2025-09-19 11:30:14.590627 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-19 11:30:14.590636 | orchestrator | Friday 19 September 2025 11:28:11 +0000 (0:00:01.704) 0:00:25.310 ****** 
2025-09-19 11:30:14.590646 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-19 11:30:14.590655 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-19 11:30:14.590664 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-19 11:30:14.590674 | orchestrator | 2025-09-19 11:30:14.590683 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-09-19 11:30:14.590693 | orchestrator | Friday 19 September 2025 11:28:14 +0000 (0:00:02.731) 0:00:28.042 ****** 2025-09-19 11:30:14.590702 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-19 11:30:14.590711 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-19 11:30:14.590720 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-19 11:30:14.590730 | orchestrator | 2025-09-19 11:30:14.590739 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-19 11:30:14.590749 | orchestrator | Friday 19 September 2025 11:28:16 +0000 (0:00:01.820) 0:00:29.862 ****** 2025-09-19 11:30:14.590758 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-19 11:30:14.590767 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-19 11:30:14.590777 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-19 11:30:14.590786 | orchestrator | 2025-09-19 11:30:14.590795 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-19 11:30:14.590805 | orchestrator | Friday 19 
September 2025 11:28:17 +0000 (0:00:01.923) 0:00:31.786 ****** 2025-09-19 11:30:14.590814 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:30:14.590823 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:30:14.590833 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:30:14.590848 | orchestrator | 2025-09-19 11:30:14.590857 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-19 11:30:14.590867 | orchestrator | Friday 19 September 2025 11:28:18 +0000 (0:00:00.427) 0:00:32.213 ****** 2025-09-19 11:30:14.590877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 11:30:14.590899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 11:30:14.590911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 11:30:14.590922 | orchestrator | 2025-09-19 11:30:14.590931 | orchestrator | TASK [rabbitmq : Creating 
rabbitmq volume] ************************************* 2025-09-19 11:30:14.590940 | orchestrator | Friday 19 September 2025 11:28:20 +0000 (0:00:02.473) 0:00:34.686 ****** 2025-09-19 11:30:14.590950 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:30:14.590959 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:30:14.590968 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:30:14.590977 | orchestrator | 2025-09-19 11:30:14.590987 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-19 11:30:14.591001 | orchestrator | Friday 19 September 2025 11:28:22 +0000 (0:00:01.278) 0:00:35.965 ****** 2025-09-19 11:30:14.591053 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:30:14.591064 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:30:14.591073 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:30:14.591083 | orchestrator | 2025-09-19 11:30:14.591092 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-19 11:30:14.591102 | orchestrator | Friday 19 September 2025 11:28:30 +0000 (0:00:08.874) 0:00:44.839 ****** 2025-09-19 11:30:14.591111 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:30:14.591121 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:30:14.591130 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:30:14.591139 | orchestrator | 2025-09-19 11:30:14.591148 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-19 11:30:14.591158 | orchestrator | 2025-09-19 11:30:14.591167 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-19 11:30:14.591179 | orchestrator | Friday 19 September 2025 11:28:31 +0000 (0:00:00.396) 0:00:45.236 ****** 2025-09-19 11:30:14.591195 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:30:14.591212 | orchestrator | 2025-09-19 11:30:14.591229 | orchestrator | TASK [rabbitmq : 
Put RabbitMQ node into maintenance mode] ********************** 2025-09-19 11:30:14.591247 | orchestrator | Friday 19 September 2025 11:28:32 +0000 (0:00:01.015) 0:00:46.251 ****** 2025-09-19 11:30:14.591395 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:30:14.591419 | orchestrator | 2025-09-19 11:30:14.591429 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-19 11:30:14.591439 | orchestrator | Friday 19 September 2025 11:28:32 +0000 (0:00:00.506) 0:00:46.758 ****** 2025-09-19 11:30:14.591448 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:30:14.591457 | orchestrator | 2025-09-19 11:30:14.591466 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-19 11:30:14.591476 | orchestrator | Friday 19 September 2025 11:28:34 +0000 (0:00:01.963) 0:00:48.722 ****** 2025-09-19 11:30:14.591485 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:30:14.591495 | orchestrator | 2025-09-19 11:30:14.591504 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-19 11:30:14.591514 | orchestrator | 2025-09-19 11:30:14.591523 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-19 11:30:14.591533 | orchestrator | Friday 19 September 2025 11:29:30 +0000 (0:00:55.726) 0:01:44.448 ****** 2025-09-19 11:30:14.591542 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:30:14.591551 | orchestrator | 2025-09-19 11:30:14.591561 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-19 11:30:14.591570 | orchestrator | Friday 19 September 2025 11:29:31 +0000 (0:00:00.594) 0:01:45.042 ****** 2025-09-19 11:30:14.591579 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:30:14.591588 | orchestrator | 2025-09-19 11:30:14.591606 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] 
*********************************** 2025-09-19 11:30:14.591616 | orchestrator | Friday 19 September 2025 11:29:31 +0000 (0:00:00.499) 0:01:45.542 ****** 2025-09-19 11:30:14.591625 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:30:14.591634 | orchestrator | 2025-09-19 11:30:14.591643 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-19 11:30:14.591653 | orchestrator | Friday 19 September 2025 11:29:38 +0000 (0:00:07.099) 0:01:52.642 ****** 2025-09-19 11:30:14.591662 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:30:14.591671 | orchestrator | 2025-09-19 11:30:14.591681 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-19 11:30:14.591690 | orchestrator | 2025-09-19 11:30:14.591699 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-19 11:30:14.591720 | orchestrator | Friday 19 September 2025 11:29:50 +0000 (0:00:11.587) 0:02:04.229 ****** 2025-09-19 11:30:14.591730 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:30:14.591739 | orchestrator | 2025-09-19 11:30:14.591764 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-19 11:30:14.591774 | orchestrator | Friday 19 September 2025 11:29:51 +0000 (0:00:00.674) 0:02:04.903 ****** 2025-09-19 11:30:14.591783 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:30:14.591792 | orchestrator | 2025-09-19 11:30:14.591800 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-19 11:30:14.591808 | orchestrator | Friday 19 September 2025 11:29:51 +0000 (0:00:00.225) 0:02:05.129 ****** 2025-09-19 11:30:14.591816 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:30:14.591823 | orchestrator | 2025-09-19 11:30:14.591831 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-19 
11:30:14.591839 | orchestrator | Friday 19 September 2025 11:29:52 +0000 (0:00:01.576) 0:02:06.705 ****** 2025-09-19 11:30:14.591846 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:30:14.591854 | orchestrator | 2025-09-19 11:30:14.591862 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-19 11:30:14.591869 | orchestrator | 2025-09-19 11:30:14.591877 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-19 11:30:14.591884 | orchestrator | Friday 19 September 2025 11:30:09 +0000 (0:00:16.536) 0:02:23.241 ****** 2025-09-19 11:30:14.591892 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:30:14.591899 | orchestrator | 2025-09-19 11:30:14.591907 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-19 11:30:14.591915 | orchestrator | Friday 19 September 2025 11:30:10 +0000 (0:00:00.715) 0:02:23.957 ****** 2025-09-19 11:30:14.591922 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-19 11:30:14.591930 | orchestrator | enable_outward_rabbitmq_True 2025-09-19 11:30:14.591937 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-19 11:30:14.591945 | orchestrator | outward_rabbitmq_restart 2025-09-19 11:30:14.591953 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:30:14.591960 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:30:14.591968 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:30:14.591975 | orchestrator | 2025-09-19 11:30:14.591983 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-19 11:30:14.591991 | orchestrator | skipping: no hosts matched 2025-09-19 11:30:14.591998 | orchestrator | 2025-09-19 11:30:14.592006 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-19 
11:30:14.592014 | orchestrator | skipping: no hosts matched 2025-09-19 11:30:14.592021 | orchestrator | 2025-09-19 11:30:14.592029 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-19 11:30:14.592037 | orchestrator | skipping: no hosts matched 2025-09-19 11:30:14.592044 | orchestrator | 2025-09-19 11:30:14.592052 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:30:14.592060 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-19 11:30:14.592069 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 11:30:14.592077 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:30:14.592084 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:30:14.592092 | orchestrator | 2025-09-19 11:30:14.592100 | orchestrator | 2025-09-19 11:30:14.592108 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:30:14.592115 | orchestrator | Friday 19 September 2025 11:30:12 +0000 (0:00:02.727) 0:02:26.685 ****** 2025-09-19 11:30:14.592123 | orchestrator | =============================================================================== 2025-09-19 11:30:14.592137 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.85s 2025-09-19 11:30:14.592145 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.64s 2025-09-19 11:30:14.592153 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.87s 2025-09-19 11:30:14.592160 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.48s 2025-09-19 11:30:14.592168 | orchestrator | 
rabbitmq : Copying over config.json files for services ------------------ 4.36s 2025-09-19 11:30:14.592175 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.73s 2025-09-19 11:30:14.592183 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.73s 2025-09-19 11:30:14.592191 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.49s 2025-09-19 11:30:14.592198 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.47s 2025-09-19 11:30:14.592209 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.28s 2025-09-19 11:30:14.592217 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.92s 2025-09-19 11:30:14.592225 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.86s 2025-09-19 11:30:14.592232 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.82s 2025-09-19 11:30:14.592240 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.74s 2025-09-19 11:30:14.592248 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.70s 2025-09-19 11:30:14.592288 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.28s 2025-09-19 11:30:14.592298 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.23s 2025-09-19 11:30:14.592305 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.18s 2025-09-19 11:30:14.592313 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.11s 2025-09-19 11:30:14.592321 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.01s 2025-09-19 11:30:14.592329 | orchestrator | 2025-09-19 
11:30:14 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:30:14.592416 | orchestrator | 2025-09-19 11:30:14 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:30:14.592717 | orchestrator | 2025-09-19 11:30:14 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:30:14.592733 | orchestrator | 2025-09-19 11:30:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:31:09.420272 | orchestrator | 2025-09-19 11:31:09 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state STARTED 2025-09-19 11:31:09.420382 | orchestrator | 2025-09-19 11:31:09 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:31:09.421044 | orchestrator | 2025-09-19 11:31:09 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:31:09.422347 | orchestrator | 2025-09-19 11:31:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:31:12.504663 | orchestrator | 2025-09-19 11:31:12.504745 | orchestrator | 2025-09-19 11:31:12.504765 | orchestrator | 
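The repeated "is in state STARTED ... Wait 1 second(s) until the next check" cycles above come from a simple poll-until-done loop over the three task IDs. A minimal sketch of such a loop, assuming a hypothetical `get_state` callable and terminal-state set (the actual osism client API may differ):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}  # assumed terminal states

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600):
    """Poll task states until all leave STARTED, logging each cycle.

    `get_state` is a hypothetical callable mapping a task id to its
    current state string; it stands in for whatever the real task
    backend exposes.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Note the log's fixed message announces a 1-second wait while consecutive cycle timestamps are roughly 3 seconds apart; per-task query latency accounts for the difference in a loop of this shape.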
PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:31:12.504785 | orchestrator |
2025-09-19 11:31:12.504796 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:31:12.504813 | orchestrator | Friday 19 September 2025 11:28:43 +0000 (0:00:00.190) 0:00:00.190 ******
2025-09-19 11:31:12.504855 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:31:12.504875 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:31:12.504893 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:31:12.504910 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:31:12.504929 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:31:12.504947 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:31:12.504964 | orchestrator |
2025-09-19 11:31:12.504982 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:31:12.505000 | orchestrator | Friday 19 September 2025 11:28:43 +0000 (0:00:00.610) 0:00:00.801 ******
2025-09-19 11:31:12.505017 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-09-19 11:31:12.505037 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-09-19 11:31:12.505052 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-09-19 11:31:12.505067 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-09-19 11:31:12.505083 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-09-19 11:31:12.505101 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-09-19 11:31:12.505122 | orchestrator |
2025-09-19 11:31:12.505167 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-09-19 11:31:12.505188 | orchestrator |
2025-09-19 11:31:12.505207 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-09-19 11:31:12.505226 | orchestrator | Friday 19 September 2025 11:28:44 +0000 (0:00:01.107) 0:00:01.908 ******
2025-09-19 11:31:12.505246 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:31:12.505266 | orchestrator |
2025-09-19 11:31:12.505286 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-09-19 11:31:12.505307 | orchestrator | Friday 19 September 2025 11:28:45 +0000 (0:00:01.010) 0:00:02.919 ******
2025-09-19 11:31:12.505330 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505419 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505530 | orchestrator |
2025-09-19 11:31:12.505549 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-09-19 11:31:12.505567 | orchestrator | Friday 19 September 2025 11:28:47 +0000 (0:00:01.837) 0:00:04.756 ******
2025-09-19 11:31:12.505587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505607 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505647 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505727 | orchestrator |
2025-09-19 11:31:12.505745 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-09-19 11:31:12.505786 | orchestrator | Friday 19 September 2025 11:28:49 +0000 (0:00:01.988) 0:00:06.745 ******
2025-09-19 11:31:12.505807 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505827 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.505937 | orchestrator |
2025-09-19 11:31:12.505958 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-09-19 11:31:12.505989 | orchestrator | Friday 19 September 2025 11:28:51 +0000 (0:00:01.206) 0:00:07.952 ******
2025-09-19 11:31:12.506011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.506103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.506134 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.506184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.506207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.506239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.506259 | orchestrator |
2025-09-19 11:31:12.506277 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-09-19 11:31:12.506295 | orchestrator | Friday 19 September 2025 11:28:52 +0000 (0:00:01.694) 0:00:09.647 ******
2025-09-19 11:31:12.506313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.506332 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.506366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.506409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.506436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.506455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.506473 | orchestrator |
2025-09-19 11:31:12.506490 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-09-19 11:31:12.506524 | orchestrator | Friday 19 September 2025 11:28:54 +0000 (0:00:01.695) 0:00:11.343 ******
2025-09-19 11:31:12.506544 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:31:12.506563 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:31:12.506580 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:31:12.506614 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:31:12.506634 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:31:12.506653 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:31:12.506671 | orchestrator |
2025-09-19 11:31:12.506690 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-09-19 11:31:12.506707 | orchestrator | Friday 19 September 2025 11:28:56 +0000 (0:00:02.443) 0:00:13.787 ******
2025-09-19 11:31:12.506726 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-09-19 11:31:12.506744 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-09-19 11:31:12.506762 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-09-19 11:31:12.506794 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-09-19 11:31:12.506815 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-09-19 11:31:12.506833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-09-19 11:31:12.506851 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 11:31:12.506870 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 11:31:12.506888 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 11:31:12.506906 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 11:31:12.506949 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 11:31:12.506968 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-19 11:31:12.506986 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 11:31:12.507007 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 11:31:12.507026 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 11:31:12.507046 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 11:31:12.507065 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 11:31:12.507122 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-19 11:31:12.507172 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 11:31:12.507193 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 11:31:12.507212 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 11:31:12.507232 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 11:31:12.507250 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 11:31:12.507268 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-19 11:31:12.507286 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 11:31:12.507304 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 11:31:12.507344 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 11:31:12.507364 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 11:31:12.507393 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 11:31:12.507412 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-19 11:31:12.507433 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 11:31:12.507451 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 11:31:12.507471 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 11:31:12.507491 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 11:31:12.507508 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 11:31:12.507528 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-19 11:31:12.507548 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-19 11:31:12.507569 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-19 11:31:12.507587 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-19 11:31:12.507606 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-19 11:31:12.507656 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-19 11:31:12.507678 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-19 11:31:12.507697 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-09-19 11:31:12.507718 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-09-19 11:31:12.507736 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-09-19 11:31:12.507754 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-09-19 11:31:12.507773 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-09-19 11:31:12.507793 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-09-19 11:31:12.507813 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-19 11:31:12.507832 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-19 11:31:12.507852 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-19 11:31:12.507870 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-19 11:31:12.507889 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-19 11:31:12.507908 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-19 11:31:12.507927 | orchestrator |
2025-09-19 11:31:12.507945 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 11:31:12.507964 | orchestrator | Friday 19 September 2025 11:29:15 +0000 (0:00:19.054) 0:00:32.842 ******
2025-09-19 11:31:12.507983 | orchestrator |
2025-09-19 11:31:12.508002 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 11:31:12.508021 | orchestrator | Friday 19 September 2025 11:29:15 +0000 (0:00:00.063) 0:00:32.905 ******
2025-09-19 11:31:12.508041 | orchestrator |
2025-09-19 11:31:12.508060 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 11:31:12.508079 | orchestrator | Friday 19 September 2025 11:29:16 +0000 (0:00:00.071) 0:00:32.976 ******
2025-09-19 11:31:12.508099 | orchestrator |
2025-09-19 11:31:12.508118 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 11:31:12.508137 | orchestrator | Friday 19 September 2025 11:29:16 +0000 (0:00:00.068) 0:00:33.045 ******
2025-09-19 11:31:12.508187 | orchestrator |
2025-09-19 11:31:12.508205 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 11:31:12.508224 | orchestrator | Friday 19 September 2025 11:29:16 +0000 (0:00:00.065) 0:00:33.110 ******
2025-09-19 11:31:12.508243 | orchestrator |
2025-09-19 11:31:12.508281 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-19 11:31:12.508303 | orchestrator | Friday 19 September 2025 11:29:16 +0000 (0:00:00.063) 0:00:33.174 ******
2025-09-19 11:31:12.508322 | orchestrator |
2025-09-19 11:31:12.508351 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-09-19 11:31:12.508372 | orchestrator | Friday 19 September 2025 11:29:16 +0000 (0:00:00.063) 0:00:33.237 ******
2025-09-19 11:31:12.508425 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:31:12.508447 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:31:12.508466 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:31:12.508485 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:31:12.508503 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:31:12.508522 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:31:12.508542 | orchestrator |
2025-09-19 11:31:12.508562 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-09-19 11:31:12.508581 | orchestrator | Friday 19 September 2025 11:29:17 +0000 (0:00:01.576) 0:00:34.813 ******
2025-09-19 11:31:12.508599 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:31:12.508620 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:31:12.508638 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:31:12.508656 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:31:12.508674 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:31:12.508693 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:31:12.508714 | orchestrator |
2025-09-19 11:31:12.508733 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-09-19 11:31:12.508752 | orchestrator |
2025-09-19 11:31:12.508772 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-19 11:31:12.508791 | orchestrator | Friday 19 September 2025 11:29:51 +0000 (0:00:33.983) 0:01:08.797 ******
2025-09-19 11:31:12.508809 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:31:12.508827 | orchestrator |
2025-09-19 11:31:12.508845 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-19 11:31:12.508863 | orchestrator | Friday 19 September 2025 11:29:52 +0000 (0:00:00.679) 0:01:09.476 ******
2025-09-19 11:31:12.508881 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:31:12.508901 | orchestrator |
2025-09-19 11:31:12.508933 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-09-19 11:31:12.508955 | orchestrator | Friday 19 September 2025 11:29:53 +0000 (0:00:00.563) 0:01:10.040 ******
2025-09-19 11:31:12.508975 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:31:12.508994 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:31:12.509012 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:31:12.509032 | orchestrator |
2025-09-19 11:31:12.509051 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-09-19 11:31:12.509071 | orchestrator | Friday 19 September 2025 11:29:54 +0000 (0:00:00.996) 0:01:11.037 ******
2025-09-19 11:31:12.509089 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:31:12.509108 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:31:12.509127 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:31:12.509198 | orchestrator |
2025-09-19 11:31:12.509219 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-09-19 11:31:12.509237 | orchestrator | Friday 19 September 2025 11:29:54 +0000 (0:00:00.364) 0:01:11.401 ******
2025-09-19 11:31:12.509256 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:31:12.509276 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:31:12.509296 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:31:12.509315 | orchestrator |
2025-09-19 11:31:12.509334 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-09-19 11:31:12.509353 | orchestrator | Friday 19 September 2025 11:29:54 +0000 (0:00:00.305) 0:01:11.706 ******
2025-09-19 11:31:12.509372 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:31:12.509392 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:31:12.509410 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:31:12.509430 | orchestrator |
2025-09-19 11:31:12.509450 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-09-19 11:31:12.509469 | orchestrator | Friday 19 September 2025 11:29:55 +0000 (0:00:00.311) 0:01:12.018 ******
2025-09-19 11:31:12.509489 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:31:12.509508 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:31:12.509544 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:31:12.509563 | orchestrator |
2025-09-19 11:31:12.509582 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-09-19 11:31:12.509602 | orchestrator | Friday 19 September 2025 11:29:55 +0000 (0:00:00.466) 0:01:12.485 ******
2025-09-19 11:31:12.509621 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:31:12.509639 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:31:12.509659 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:31:12.509677 | orchestrator |
2025-09-19 11:31:12.509695 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-09-19 11:31:12.509713 | orchestrator | Friday 19 September 2025 11:29:55 +0000 (0:00:00.279) 0:01:12.765 ******
2025-09-19 11:31:12.509732 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:31:12.509749 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:31:12.509766 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:31:12.509783 | orchestrator |
2025-09-19 11:31:12.509801 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-09-19 11:31:12.509819 | orchestrator | Friday 19 September 2025 11:29:56 +0000 (0:00:00.387) 0:01:13.153 ******
2025-09-19 11:31:12.509837 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:31:12.509856 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:31:12.509874 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:31:12.509891 | orchestrator |
2025-09-19 11:31:12.509910 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-09-19 11:31:12.509928 | orchestrator | Friday 19 September 2025 11:29:56 +0000 (0:00:00.301) 0:01:13.455 ******
2025-09-19 11:31:12.509945 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:31:12.509964 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:31:12.510004 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:31:12.510066 | orchestrator |
2025-09-19 11:31:12.510078 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-09-19 11:31:12.510089 | orchestrator | Friday 19 September 2025 11:29:56 +0000 (0:00:00.441) 0:01:13.896 ******
2025-09-19 11:31:12.510100 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:31:12.510125 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:31:12.510208 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:31:12.510232 | orchestrator |
2025-09-19 11:31:12.510254 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-09-19 11:31:12.510284 | orchestrator | Friday 19 September 2025 11:29:57 +0000 (0:00:00.284) 0:01:14.180 ******
2025-09-19 11:31:12.510301 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:31:12.510311 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:31:12.510321 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:31:12.510330 | orchestrator |
2025-09-19 11:31:12.510348 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-09-19 11:31:12.510365 | orchestrator | Friday 19 September 2025 11:29:57 +0000 (0:00:00.250) 0:01:14.431 ******
2025-09-19 11:31:12.510375 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:31:12.510385 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:31:12.510394 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:31:12.510410 | orchestrator |
2025-09-19 11:31:12.510427 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-09-19 11:31:12.510437 | orchestrator | Friday 19 September 2025 11:29:57 +0000 (0:00:00.420) 0:01:14.656 ******
2025-09-19 11:31:12.510447 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:31:12.510456 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:31:12.510465 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:31:12.510475 | orchestrator |
2025-09-19 11:31:12.510484 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-09-19 11:31:12.510494 | orchestrator | Friday 19 September 2025 11:29:58 +0000 (0:00:00.420) 0:01:15.076 ******
2025-09-19 11:31:12.510503 | orchestrator |
skipping: [testbed-node-0] 2025-09-19 11:31:12.510512 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:31:12.510522 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:12.510542 | orchestrator | 2025-09-19 11:31:12.510551 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-19 11:31:12.510561 | orchestrator | Friday 19 September 2025 11:29:58 +0000 (0:00:00.300) 0:01:15.376 ****** 2025-09-19 11:31:12.510570 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:12.510580 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:31:12.510589 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:12.510598 | orchestrator | 2025-09-19 11:31:12.510624 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-19 11:31:12.510643 | orchestrator | Friday 19 September 2025 11:29:58 +0000 (0:00:00.269) 0:01:15.646 ****** 2025-09-19 11:31:12.510653 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:12.510663 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:31:12.510672 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:12.510681 | orchestrator | 2025-09-19 11:31:12.510690 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-19 11:31:12.510700 | orchestrator | Friday 19 September 2025 11:29:58 +0000 (0:00:00.221) 0:01:15.868 ****** 2025-09-19 11:31:12.510709 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:12.510719 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:31:12.510728 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:12.510738 | orchestrator | 2025-09-19 11:31:12.510747 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-19 11:31:12.510756 | orchestrator | Friday 19 September 2025 11:29:59 +0000 (0:00:00.396) 0:01:16.264 ****** 2025-09-19 11:31:12.510766 | orchestrator | 
included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:31:12.510776 | orchestrator | 2025-09-19 11:31:12.510786 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-19 11:31:12.510795 | orchestrator | Friday 19 September 2025 11:29:59 +0000 (0:00:00.467) 0:01:16.731 ****** 2025-09-19 11:31:12.510804 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:31:12.510814 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:31:12.510823 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:31:12.510832 | orchestrator | 2025-09-19 11:31:12.510842 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-19 11:31:12.510851 | orchestrator | Friday 19 September 2025 11:30:00 +0000 (0:00:00.361) 0:01:17.093 ****** 2025-09-19 11:31:12.510861 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:31:12.510870 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:31:12.510879 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:31:12.510888 | orchestrator | 2025-09-19 11:31:12.510898 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-19 11:31:12.510907 | orchestrator | Friday 19 September 2025 11:30:00 +0000 (0:00:00.610) 0:01:17.703 ****** 2025-09-19 11:31:12.510916 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:12.510926 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:31:12.510935 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:12.510944 | orchestrator | 2025-09-19 11:31:12.510954 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-19 11:31:12.510963 | orchestrator | Friday 19 September 2025 11:30:01 +0000 (0:00:00.331) 0:01:18.035 ****** 2025-09-19 11:31:12.510973 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:12.510982 | orchestrator | skipping: [testbed-node-1] 
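The "Check NB/SB cluster status" and "Divide hosts by their OVN NB/SB leader/follower role" tasks above inspect the Raft status that each OVSDB server reports (ovsdb-server's `cluster/status` output). As a rough, hypothetical illustration of the classification involved — the field names follow the `cluster/status` "Key: value" layout, but the sample text below is invented, not captured from this job:

```python
def parse_cluster_role(status_text: str) -> dict:
    """Extract the Role and Leader fields from a cluster/status report.

    Assumes the "Key: value" line layout of ovsdb-server's cluster/status
    output; the sample used below is illustrative, not from this log.
    """
    fields = {}
    for line in status_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return {"role": fields.get("Role"), "leader": fields.get("Leader")}


# Illustrative sample of a follower's status report (hypothetical values).
sample = """\
Name: OVN_Northbound
Role: follower
Leader: b2c3
Term: 4
"""

print(parse_cluster_role(sample))  # → {'role': 'follower', 'leader': 'b2c3'}
```

A playbook like the one above would run such a classification per host and then group hosts into leader/follower sets before deciding which bootstrap path to take.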
2025-09-19 11:31:12.510996 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:12.511009 | orchestrator | 2025-09-19 11:31:12.511019 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-19 11:31:12.511029 | orchestrator | Friday 19 September 2025 11:30:01 +0000 (0:00:00.326) 0:01:18.362 ****** 2025-09-19 11:31:12.511038 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:12.511047 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:31:12.511057 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:12.511069 | orchestrator | 2025-09-19 11:31:12.511085 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-19 11:31:12.511101 | orchestrator | Friday 19 September 2025 11:30:01 +0000 (0:00:00.300) 0:01:18.662 ****** 2025-09-19 11:31:12.511110 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:12.511120 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:31:12.511129 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:12.511138 | orchestrator | 2025-09-19 11:31:12.511167 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-19 11:31:12.511177 | orchestrator | Friday 19 September 2025 11:30:02 +0000 (0:00:00.474) 0:01:19.136 ****** 2025-09-19 11:31:12.511187 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:12.511196 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:31:12.511205 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:12.511217 | orchestrator | 2025-09-19 11:31:12.511242 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-19 11:31:12.511257 | orchestrator | Friday 19 September 2025 11:30:02 +0000 (0:00:00.293) 0:01:19.430 ****** 2025-09-19 11:31:12.511274 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:12.511310 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 11:31:12.511324 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:12.511334 | orchestrator | 2025-09-19 11:31:12.511344 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-19 11:31:12.511353 | orchestrator | Friday 19 September 2025 11:30:02 +0000 (0:00:00.286) 0:01:19.717 ****** 2025-09-19 11:31:12.511365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12 | INFO  | Task 67a7d4d8-addb-4c82-8d2b-d5beb736cea0 is in state SUCCESS 2025-09-19 11:31:12.511423 | orchestrator | 2025-09-19 11:31:12 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:31:12.511433 | orchestrator | 2025-09-19 11:31:12 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:31:12.511442 | orchestrator | 2025-09-19 11:31:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:31:12.511453 | orchestrator | 2025-09-19 11:31:12.511463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19
11:31:12.511518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511538 | orchestrator | 2025-09-19 11:31:12.511553 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-19 11:31:12.511570 | orchestrator | Friday 19 September 2025 11:30:04 +0000 (0:00:01.401) 0:01:21.118 ****** 2025-09-19 11:31:12.511583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-19 11:31:12.511627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511714 | orchestrator | 2025-09-19 11:31:12.511724 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-19 11:31:12.511734 | orchestrator | Friday 19 September 2025 11:30:08 +0000 (0:00:04.132) 0:01:25.251 ****** 2025-09-19 11:31:12.511744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.511845 | orchestrator | 2025-09-19 11:31:12.511863 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 11:31:12.511873 | orchestrator | Friday 19 September 2025 11:30:10 +0000 (0:00:02.287) 0:01:27.538 ****** 2025-09-19 11:31:12.511882 | orchestrator | 2025-09-19 11:31:12.511891 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 11:31:12.511901 | orchestrator | Friday 19 September 2025 11:30:10 
+0000 (0:00:00.065) 0:01:27.604 ****** 2025-09-19 11:31:12.511910 | orchestrator | 2025-09-19 11:31:12.511919 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-19 11:31:12.511929 | orchestrator | Friday 19 September 2025 11:30:10 +0000 (0:00:00.055) 0:01:27.659 ****** 2025-09-19 11:31:12.511938 | orchestrator | 2025-09-19 11:31:12.511947 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-19 11:31:12.511957 | orchestrator | Friday 19 September 2025 11:30:10 +0000 (0:00:00.061) 0:01:27.721 ****** 2025-09-19 11:31:12.511966 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:31:12.511978 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:31:12.511996 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:31:12.512006 | orchestrator | 2025-09-19 11:31:12.512023 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-19 11:31:12.512035 | orchestrator | Friday 19 September 2025 11:30:18 +0000 (0:00:07.444) 0:01:35.166 ****** 2025-09-19 11:31:12.512044 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:31:12.512053 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:31:12.512063 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:31:12.512072 | orchestrator | 2025-09-19 11:31:12.512082 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-19 11:31:12.512091 | orchestrator | Friday 19 September 2025 11:30:24 +0000 (0:00:06.522) 0:01:41.688 ****** 2025-09-19 11:31:12.512108 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:31:12.512126 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:31:12.512136 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:31:12.512164 | orchestrator | 2025-09-19 11:31:12.512180 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-19 
11:31:12.512190 | orchestrator | Friday 19 September 2025 11:30:32 +0000 (0:00:07.322) 0:01:49.011 ****** 2025-09-19 11:31:12.512200 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:31:12.512209 | orchestrator | 2025-09-19 11:31:12.512222 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-19 11:31:12.512236 | orchestrator | Friday 19 September 2025 11:30:32 +0000 (0:00:00.136) 0:01:49.147 ****** 2025-09-19 11:31:12.512252 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:31:12.512264 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:31:12.512273 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:31:12.512289 | orchestrator | 2025-09-19 11:31:12.512305 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-19 11:31:12.512321 | orchestrator | Friday 19 September 2025 11:30:32 +0000 (0:00:00.781) 0:01:49.929 ****** 2025-09-19 11:31:12.512339 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:31:12.512365 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:12.512384 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:31:12.512399 | orchestrator | 2025-09-19 11:31:12.512415 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-19 11:31:12.512432 | orchestrator | Friday 19 September 2025 11:30:33 +0000 (0:00:00.599) 0:01:50.528 ****** 2025-09-19 11:31:12.512449 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:31:12.512465 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:31:12.512480 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:31:12.512496 | orchestrator | 2025-09-19 11:31:12.512512 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-19 11:31:12.512528 | orchestrator | Friday 19 September 2025 11:30:34 +0000 (0:00:01.151) 0:01:51.679 ****** 2025-09-19 11:31:12.512562 | orchestrator | skipping: [testbed-node-1] 
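In the "Configure OVN NB connection settings" task above (and the SB counterpart that follows), only one host reports `changed` while the others are skipped: the connection settings are applied on the elected Raft leader only. A small, hypothetical sketch of that selection logic — host names and the role mapping are invented for illustration, not derived from kolla-ansible's implementation:

```python
def connection_settings_host(roles: dict) -> "str | None":
    """Return the single host that should apply DB connection settings.

    Mirrors the pattern visible in the log: the task runs (changed) on the
    Raft leader and is skipped on the followers. Returns None unless
    exactly one leader is present.
    """
    leaders = [host for host, role in roles.items() if role == "leader"]
    return leaders[0] if len(leaders) == 1 else None


# Illustrative role assignment matching the skip/changed pattern above.
roles = {
    "testbed-node-0": "leader",
    "testbed-node-1": "follower",
    "testbed-node-2": "follower",
}
print(connection_settings_host(roles))  # → testbed-node-0
```

Running the change on exactly one member avoids three hosts racing to rewrite the same clustered database's connection table.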
2025-09-19 11:31:12.512579 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:31:12.512596 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:31:12.512614 | orchestrator | 2025-09-19 11:31:12.512631 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-19 11:31:12.512648 | orchestrator | Friday 19 September 2025 11:30:35 +0000 (0:00:00.634) 0:01:52.314 ****** 2025-09-19 11:31:12.512664 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:31:12.512681 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:31:12.512697 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:31:12.512713 | orchestrator | 2025-09-19 11:31:12.512731 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-19 11:31:12.512747 | orchestrator | Friday 19 September 2025 11:30:36 +0000 (0:00:00.865) 0:01:53.180 ****** 2025-09-19 11:31:12.512764 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:31:12.512781 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:31:12.512797 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:31:12.512813 | orchestrator | 2025-09-19 11:31:12.512829 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-19 11:31:12.512845 | orchestrator | Friday 19 September 2025 11:30:37 +0000 (0:00:00.783) 0:01:53.963 ****** 2025-09-19 11:31:12.512861 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:31:12.512878 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:31:12.512894 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:31:12.512910 | orchestrator | 2025-09-19 11:31:12.512925 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-19 11:31:12.512942 | orchestrator | Friday 19 September 2025 11:30:37 +0000 (0:00:00.416) 0:01:54.380 ****** 2025-09-19 11:31:12.512959 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.512995 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.513011 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.513029 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.513047 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-09-19 11:31:12.513077 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.513094 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.513110 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.513127 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:31:12.513202 | orchestrator | 2025-09-19 11:31:12.513220 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-19 
11:31:12.513238 | orchestrator | Friday 19 September 2025 11:30:38 +0000 (0:00:01.300) 0:01:55.681 ******
2025-09-19 11:31:12.513269 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513298 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513321 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513340 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513403 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513455 | orchestrator |
2025-09-19 11:31:12.513470 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-19 11:31:12.513484 | orchestrator | Friday 19 September 2025 11:30:42 +0000 (0:00:03.730) 0:01:59.411 ******
2025-09-19 11:31:12.513499 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513522 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513537 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513586 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513638 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513652 | orchestrator | changed: [testbed-node-2] => (item={'key':
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:31:12.513666 | orchestrator |
2025-09-19 11:31:12.513680 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 11:31:12.513694 | orchestrator | Friday 19 September 2025 11:30:44 +0000 (0:00:02.389) 0:02:01.801 ******
2025-09-19 11:31:12.513707 | orchestrator |
2025-09-19 11:31:12.513722 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 11:31:12.513758 | orchestrator | Friday 19 September 2025 11:30:45 +0000 (0:00:00.203) 0:02:02.005 ******
2025-09-19 11:31:12.513783 | orchestrator |
2025-09-19 11:31:12.513798 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 11:31:12.513812 | orchestrator | Friday 19 September 2025 11:30:45 +0000 (0:00:00.060) 0:02:02.065 ******
2025-09-19 11:31:12.513825 | orchestrator |
2025-09-19 11:31:12.513839 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-19 11:31:12.513853 | orchestrator | Friday 19 September 2025 11:30:45 +0000 (0:00:00.058) 0:02:02.123 ******
2025-09-19 11:31:12.513867 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:31:12.513881 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:31:12.513895 | orchestrator |
2025-09-19 11:31:12.513909 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-19 11:31:12.513923 | orchestrator | Friday 19 September 2025 11:30:51 +0000 (0:00:06.156) 0:02:08.280 ******
2025-09-19 11:31:12.513937 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:31:12.513951 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:31:12.513965 | orchestrator |
2025-09-19 11:31:12.513979 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-19 11:31:12.513993 | orchestrator | Friday 19 September 2025 11:30:57 +0000 (0:00:06.315) 0:02:14.595 ******
2025-09-19 11:31:12.514007 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:31:12.514059 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:31:12.514073 | orchestrator |
2025-09-19 11:31:12.514086 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-19 11:31:12.514100 | orchestrator | Friday 19 September 2025 11:31:03 +0000 (0:00:06.253) 0:02:20.849 ******
2025-09-19 11:31:12.514113 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:31:12.514126 | orchestrator |
2025-09-19 11:31:12.514158 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-19 11:31:12.514173 | orchestrator | Friday 19 September 2025 11:31:04 +0000 (0:00:00.207) 0:02:21.056 ******
2025-09-19 11:31:12.514187 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:31:12.514201 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:31:12.514214 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:31:12.514227 | orchestrator |
2025-09-19 11:31:12.514240 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-19 11:31:12.514253 | orchestrator | Friday 19 September 2025 11:31:04 +0000 (0:00:00.808) 0:02:21.864 ******
2025-09-19 11:31:12.514267 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:31:12.514281 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:31:12.514294 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:31:12.514307 | orchestrator |
2025-09-19 11:31:12.514321 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader]
******************************
2025-09-19 11:31:12.514335 | orchestrator | Friday 19 September 2025 11:31:05 +0000 (0:00:00.707) 0:02:22.572 ******
2025-09-19 11:31:12.514348 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:31:12.514362 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:31:12.514375 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:31:12.514388 | orchestrator |
2025-09-19 11:31:12.514401 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-19 11:31:12.514414 | orchestrator | Friday 19 September 2025 11:31:06 +0000 (0:00:00.833) 0:02:23.406 ******
2025-09-19 11:31:12.514511 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:31:12.514537 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:31:12.514550 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:31:12.514563 | orchestrator |
2025-09-19 11:31:12.514589 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-19 11:31:12.514603 | orchestrator | Friday 19 September 2025 11:31:07 +0000 (0:00:00.882) 0:02:24.289 ******
2025-09-19 11:31:12.514617 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:31:12.514630 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:31:12.514644 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:31:12.514658 | orchestrator |
2025-09-19 11:31:12.514671 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-19 11:31:12.514695 | orchestrator | Friday 19 September 2025 11:31:08 +0000 (0:00:01.322) 0:02:25.613 ******
2025-09-19 11:31:12.514709 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:31:12.514722 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:31:12.514745 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:31:12.514758 | orchestrator |
2025-09-19 11:31:12.514772 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:31:12.514786 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-19 11:31:12.514799 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-19 11:31:12.514812 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-19 11:31:12.514825 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:31:12.514839 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:31:12.514853 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:31:12.514866 | orchestrator |
2025-09-19 11:31:12.514880 | orchestrator |
2025-09-19 11:31:12.514894 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:31:12.514908 | orchestrator | Friday 19 September 2025 11:31:10 +0000 (0:00:02.064) 0:02:27.677 ******
2025-09-19 11:31:12.514922 | orchestrator | ===============================================================================
2025-09-19 11:31:12.514935 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 33.98s
2025-09-19 11:31:12.514948 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.05s
2025-09-19 11:31:12.514963 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.60s
2025-09-19 11:31:12.514975 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.58s
2025-09-19 11:31:12.514986 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.84s
2025-09-19 11:31:12.514994 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.13s
2025-09-19 11:31:12.515001
| orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.73s
2025-09-19 11:31:12.515009 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.44s
2025-09-19 11:31:12.515017 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.39s
2025-09-19 11:31:12.515025 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.29s
2025-09-19 11:31:12.515032 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 2.06s
2025-09-19 11:31:12.515040 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.99s
2025-09-19 11:31:12.515047 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.84s
2025-09-19 11:31:12.515055 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.70s
2025-09-19 11:31:12.515062 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.69s
2025-09-19 11:31:12.515070 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.58s
2025-09-19 11:31:12.515078 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.40s
2025-09-19 11:31:12.515085 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.32s
2025-09-19 11:31:12.515098 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.30s
2025-09-19 11:31:12.515113 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.21s
2025-09-19 11:31:15.529899 | orchestrator | 2025-09-19 11:31:15 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:31:15.530001 | orchestrator | 2025-09-19 11:31:15 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19
11:31:15.530083 | orchestrator | 2025-09-19 11:31:15 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:18.579600 | orchestrator | 2025-09-19 11:31:18 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:31:18.582348 | orchestrator | 2025-09-19 11:31:18 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:31:18.582379 | orchestrator | 2025-09-19 11:31:18 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:21.639613 | orchestrator | 2025-09-19 11:31:21 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:31:21.640061 | orchestrator | 2025-09-19 11:31:21 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:31:21.640189 | orchestrator | 2025-09-19 11:31:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:24.676218 | orchestrator | 2025-09-19 11:31:24 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:31:24.678879 | orchestrator | 2025-09-19 11:31:24 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:31:24.678896 | orchestrator | 2025-09-19 11:31:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:27.711787 | orchestrator | 2025-09-19 11:31:27 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:31:27.713301 | orchestrator | 2025-09-19 11:31:27 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:31:27.713827 | orchestrator | 2025-09-19 11:31:27 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:31:30.757859 | orchestrator | 2025-09-19 11:31:30 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:31:30.759541 | orchestrator | 2025-09-19 11:31:30 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED
2025-09-19 11:31:30.759605 | orchestrator | 2025-09-19 11:31:30 | INFO  | Wait 1 second(s)
until the next check
2025-09-19 11:33:23.362160 | orchestrator | 2025-09-19 11:33:23 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state
STARTED 2025-09-19 11:33:23.362978 | orchestrator | 2025-09-19 11:33:23 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:33:23.363026 | orchestrator | 2025-09-19 11:33:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:26.409768 | orchestrator | 2025-09-19 11:33:26 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:33:26.411397 | orchestrator | 2025-09-19 11:33:26 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:33:26.411433 | orchestrator | 2025-09-19 11:33:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:29.456853 | orchestrator | 2025-09-19 11:33:29 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:33:29.459612 | orchestrator | 2025-09-19 11:33:29 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:33:29.459653 | orchestrator | 2025-09-19 11:33:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:32.504528 | orchestrator | 2025-09-19 11:33:32 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:33:32.505973 | orchestrator | 2025-09-19 11:33:32 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:33:32.506109 | orchestrator | 2025-09-19 11:33:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:35.551658 | orchestrator | 2025-09-19 11:33:35 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:33:35.553041 | orchestrator | 2025-09-19 11:33:35 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:33:35.553076 | orchestrator | 2025-09-19 11:33:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:38.595559 | orchestrator | 2025-09-19 11:33:38 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:33:38.595984 | orchestrator | 2025-09-19 11:33:38 | INFO  
| Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:33:38.596013 | orchestrator | 2025-09-19 11:33:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:41.634182 | orchestrator | 2025-09-19 11:33:41 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:33:41.635060 | orchestrator | 2025-09-19 11:33:41 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:33:41.635091 | orchestrator | 2025-09-19 11:33:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:44.679289 | orchestrator | 2025-09-19 11:33:44 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:33:44.680955 | orchestrator | 2025-09-19 11:33:44 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:33:44.680988 | orchestrator | 2025-09-19 11:33:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:47.729238 | orchestrator | 2025-09-19 11:33:47 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:33:47.731077 | orchestrator | 2025-09-19 11:33:47 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:33:47.731140 | orchestrator | 2025-09-19 11:33:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:50.786503 | orchestrator | 2025-09-19 11:33:50 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:33:50.787450 | orchestrator | 2025-09-19 11:33:50 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 11:33:50.787561 | orchestrator | 2025-09-19 11:33:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:53.839379 | orchestrator | 2025-09-19 11:33:53 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:33:53.840054 | orchestrator | 2025-09-19 11:33:53 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state STARTED 2025-09-19 
11:33:53.840079 | orchestrator | 2025-09-19 11:33:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:33:56.887824 | orchestrator | 2025-09-19 11:33:56 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED 2025-09-19 11:33:56.888316 | orchestrator | 2025-09-19 11:33:56 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED 2025-09-19 11:33:56.890094 | orchestrator | 2025-09-19 11:33:56 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED 2025-09-19 11:33:56.899682 | orchestrator | 2025-09-19 11:33:56 | INFO  | Task 16f522e5-163b-4ef0-90eb-c830f9e24634 is in state SUCCESS 2025-09-19 11:33:56.902494 | orchestrator | 2025-09-19 11:33:56.902530 | orchestrator | 2025-09-19 11:33:56.902540 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:33:56.902549 | orchestrator | 2025-09-19 11:33:56.902557 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:33:56.902566 | orchestrator | Friday 19 September 2025 11:27:26 +0000 (0:00:00.312) 0:00:00.312 ****** 2025-09-19 11:33:56.902574 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:56.902583 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:56.902591 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:56.902599 | orchestrator | 2025-09-19 11:33:56.902608 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:33:56.902616 | orchestrator | Friday 19 September 2025 11:27:26 +0000 (0:00:00.314) 0:00:00.627 ****** 2025-09-19 11:33:56.902625 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-19 11:33:56.902633 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-19 11:33:56.902640 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-19 11:33:56.902648 | orchestrator | 2025-09-19 11:33:56.902656 | 
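
The repeated status checks in the log above follow a simple poll-and-sleep pattern: query each task's state, record the ones that have finished, and sleep before the next round. A rough sketch of that loop — the `get_task_state` callable and the task IDs are hypothetical stand-ins, not the actual osism client API:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll task states until no task is in STARTED any more (hypothetical helper)."""
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                results[task_id] = state  # terminal state, e.g. SUCCESS
        pending.difference_update(results)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

As in the log, tasks finish independently: one task may reach SUCCESS while others stay STARTED, and polling simply continues for the remainder.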
orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-19 11:33:56.902686 | orchestrator | 2025-09-19 11:33:56.902694 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-19 11:33:56.902701 | orchestrator | Friday 19 September 2025 11:27:26 +0000 (0:00:00.485) 0:00:01.112 ****** 2025-09-19 11:33:56.902735 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.902743 | orchestrator | 2025-09-19 11:33:56.902751 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-19 11:33:56.902759 | orchestrator | Friday 19 September 2025 11:27:27 +0000 (0:00:00.840) 0:00:01.953 ****** 2025-09-19 11:33:56.902767 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:56.902774 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:56.902782 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:56.902790 | orchestrator | 2025-09-19 11:33:56.902798 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-19 11:33:56.902805 | orchestrator | Friday 19 September 2025 11:27:28 +0000 (0:00:00.906) 0:00:02.859 ****** 2025-09-19 11:33:56.902813 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.902821 | orchestrator | 2025-09-19 11:33:56.902852 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-19 11:33:56.902861 | orchestrator | Friday 19 September 2025 11:27:29 +0000 (0:00:01.046) 0:00:03.906 ****** 2025-09-19 11:33:56.902869 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:56.902876 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:56.902884 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:56.902892 | orchestrator | 2025-09-19 11:33:56.902900 | orchestrator | TASK [sysctl : 
Setting sysctl values] ****************************************** 2025-09-19 11:33:56.902908 | orchestrator | Friday 19 September 2025 11:27:30 +0000 (0:00:00.851) 0:00:04.757 ****** 2025-09-19 11:33:56.902930 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-19 11:33:56.902938 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-19 11:33:56.902946 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-19 11:33:56.902953 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-19 11:33:56.902961 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-19 11:33:56.902968 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-19 11:33:56.902977 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-19 11:33:56.903016 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-19 11:33:56.903025 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-19 11:33:56.903033 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-19 11:33:56.903040 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-19 11:33:56.903048 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-19 11:33:56.903056 | orchestrator | 2025-09-19 11:33:56.903064 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-19 11:33:56.903071 | orchestrator | Friday 19 September 2025 11:27:34 
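
The mixed `ok`/`changed` results in the sysctl task above reflect a sentinel convention: items whose value is the string `KOLLA_UNSET` (here `net.ipv4.tcp_retries2`) are left untouched and report `ok`, while the remaining keys are written and report `changed`. A minimal sketch of that behaviour, where `write_sysctl` is a hypothetical stand-in for `sysctl -w` / the Ansible sysctl module:

```python
KOLLA_UNSET = "KOLLA_UNSET"  # sentinel meaning "do not manage this key"

def apply_sysctl(settings, current, write_sysctl):
    """Apply sysctl items, skipping sentinel values; return the changed keys."""
    changed = []
    for item in settings:
        name, value = item["name"], item["value"]
        if value == KOLLA_UNSET:
            continue  # reported as "ok" in the log: key left as-is
        if current.get(name) != value:
            write_sysctl(name, value)
            current[name] = value
            changed.append(name)
    return changed
```

This is an illustration of the skip-on-sentinel logic only; the real role also handles persistence in `/etc/sysctl.conf` style files.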
+0000 (0:00:03.643) 0:00:08.401 ****** 2025-09-19 11:33:56.903079 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-19 11:33:56.903088 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-19 11:33:56.903096 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-19 11:33:56.903103 | orchestrator | 2025-09-19 11:33:56.903111 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-19 11:33:56.903119 | orchestrator | Friday 19 September 2025 11:27:35 +0000 (0:00:00.998) 0:00:09.399 ****** 2025-09-19 11:33:56.903127 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-19 11:33:56.903135 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-19 11:33:56.903151 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-19 11:33:56.903159 | orchestrator | 2025-09-19 11:33:56.903167 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-19 11:33:56.903175 | orchestrator | Friday 19 September 2025 11:27:37 +0000 (0:00:02.043) 0:00:11.443 ****** 2025-09-19 11:33:56.903183 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-19 11:33:56.903191 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.903210 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-19 11:33:56.903218 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.903226 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-19 11:33:56.903234 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.903242 | orchestrator | 2025-09-19 11:33:56.903250 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-19 11:33:56.903258 | orchestrator | Friday 19 September 2025 11:27:38 +0000 (0:00:01.503) 0:00:12.946 ****** 2025-09-19 11:33:56.903269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.903283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.903307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.903316 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.903325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.903344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
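
Each container definition above carries a healthcheck (a CMD-SHELL probe such as `healthcheck_curl` or `healthcheck_listen`, with interval 30, timeout 30, retries 3). Greatly simplified, the retry semantics look like this — `probe` stands in for the shell test, and the interval/timeout handling and consecutive-failure tracking of real Docker healthchecks are omitted:

```python
def container_health(probe, retries=3):
    """Simplified healthcheck: healthy if the probe succeeds within `retries` attempts."""
    for _ in range(retries):
        if probe():
            return "healthy"
    return "unhealthy"
```

Note that `keepalived` is defined without a healthcheck block, so only haproxy and proxysql are probed this way.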
2025-09-19 11:33:56.903354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:33:56.903363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:33:56.903371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:33:56.903380 | orchestrator | 2025-09-19 11:33:56.903388 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-19 11:33:56.903396 | orchestrator | Friday 19 September 2025 11:27:40 +0000 (0:00:02.083) 0:00:15.030 ****** 2025-09-19 11:33:56.903404 | orchestrator | 
changed: [testbed-node-0] 2025-09-19 11:33:56.903412 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.903420 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.903428 | orchestrator | 2025-09-19 11:33:56.903436 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-19 11:33:56.903447 | orchestrator | Friday 19 September 2025 11:27:42 +0000 (0:00:01.744) 0:00:16.774 ****** 2025-09-19 11:33:56.903455 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-19 11:33:56.903463 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-19 11:33:56.903471 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-19 11:33:56.903479 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-19 11:33:56.903600 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-19 11:33:56.903607 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-19 11:33:56.903615 | orchestrator | 2025-09-19 11:33:56.903623 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-19 11:33:56.903631 | orchestrator | Friday 19 September 2025 11:27:44 +0000 (0:00:01.998) 0:00:18.773 ****** 2025-09-19 11:33:56.903645 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.903653 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.903661 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.903668 | orchestrator | 2025-09-19 11:33:56.903676 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-19 11:33:56.903684 | orchestrator | Friday 19 September 2025 11:27:46 +0000 (0:00:01.606) 0:00:20.379 ****** 2025-09-19 11:33:56.903692 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:56.903700 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:56.903707 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:56.903715 | orchestrator | 2025-09-19 
11:33:56.903723 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-19 11:33:56.903731 | orchestrator | Friday 19 September 2025 11:27:49 +0000 (0:00:03.317) 0:00:23.696 ****** 2025-09-19 11:33:56.903739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 11:33:56.903755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:33:56.903764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:33:56.903773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b30313a434c9781db8be115b7c1a38a33255d4a7', '__omit_place_holder__b30313a434c9781db8be115b7c1a38a33255d4a7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 11:33:56.903781 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.903793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 11:33:56.903818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:33:56.903851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:33:56.903878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b30313a434c9781db8be115b7c1a38a33255d4a7', '__omit_place_holder__b30313a434c9781db8be115b7c1a38a33255d4a7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 11:33:56.903895 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.903908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 11:33:56.903921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:33:56.903934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:33:56.903963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 
'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b30313a434c9781db8be115b7c1a38a33255d4a7', '__omit_place_holder__b30313a434c9781db8be115b7c1a38a33255d4a7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 11:33:56.903978 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.903991 | orchestrator | 2025-09-19 11:33:56.904004 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-19 11:33:56.904033 | orchestrator | Friday 19 September 2025 11:27:50 +0000 (0:00:00.754) 0:00:24.450 ****** 2025-09-19 11:33:56.904045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.904059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.904068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.904078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.904093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:33:56.904121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b30313a434c9781db8be115b7c1a38a33255d4a7', '__omit_place_holder__b30313a434c9781db8be115b7c1a38a33255d4a7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 11:33:56.904135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.904148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:33:56.904167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b30313a434c9781db8be115b7c1a38a33255d4a7', '__omit_place_holder__b30313a434c9781db8be115b7c1a38a33255d4a7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 11:33:56.904180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.904194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:33:56.904228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b30313a434c9781db8be115b7c1a38a33255d4a7', '__omit_place_holder__b30313a434c9781db8be115b7c1a38a33255d4a7'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-19 11:33:56.904243 | orchestrator | 2025-09-19 11:33:56.904256 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-19 11:33:56.904268 | orchestrator | Friday 19 September 2025 11:27:53 +0000 (0:00:03.374) 0:00:27.825 ****** 2025-09-19 11:33:56.904276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.904284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.904330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.904339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.904348 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.904371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.904381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:33:56.904391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:33:56.904401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:33:56.904411 | orchestrator | 2025-09-19 11:33:56.904420 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-19 11:33:56.904430 | orchestrator | Friday 19 September 2025 11:27:57 +0000 (0:00:04.365) 0:00:32.190 ****** 2025-09-19 11:33:56.904439 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-19 11:33:56.905014 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-19 11:33:56.905041 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-19 11:33:56.905051 | orchestrator | 2025-09-19 11:33:56.905061 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-19 11:33:56.905070 | orchestrator | Friday 19 September 2025 11:28:01 +0000 (0:00:03.956) 0:00:36.147 ****** 
2025-09-19 11:33:56.905080 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-19 11:33:56.905090 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-19 11:33:56.905099 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-19 11:33:56.905122 | orchestrator | 2025-09-19 11:33:56.905132 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-19 11:33:56.905141 | orchestrator | Friday 19 September 2025 11:28:07 +0000 (0:00:05.234) 0:00:41.381 ****** 2025-09-19 11:33:56.905151 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.905160 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.905170 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.905179 | orchestrator | 2025-09-19 11:33:56.905189 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-19 11:33:56.905198 | orchestrator | Friday 19 September 2025 11:28:08 +0000 (0:00:01.032) 0:00:42.413 ****** 2025-09-19 11:33:56.905208 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-19 11:33:56.905219 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-19 11:33:56.905229 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-19 11:33:56.905238 | orchestrator | 2025-09-19 11:33:56.905248 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-19 11:33:56.905257 | orchestrator | Friday 19 September 2025 11:28:10 +0000 (0:00:02.661) 0:00:45.075 ****** 
2025-09-19 11:33:56.905267 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-19 11:33:56.905276 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-19 11:33:56.905286 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-19 11:33:56.905295 | orchestrator | 2025-09-19 11:33:56.905310 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-19 11:33:56.905320 | orchestrator | Friday 19 September 2025 11:28:14 +0000 (0:00:03.336) 0:00:48.411 ****** 2025-09-19 11:33:56.905329 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-19 11:33:56.905339 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-19 11:33:56.905348 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-19 11:33:56.905358 | orchestrator | 2025-09-19 11:33:56.905367 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-19 11:33:56.905377 | orchestrator | Friday 19 September 2025 11:28:16 +0000 (0:00:01.853) 0:00:50.265 ****** 2025-09-19 11:33:56.905386 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-19 11:33:56.905395 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-09-19 11:33:56.905404 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-19 11:33:56.905414 | orchestrator | 2025-09-19 11:33:56.905423 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-19 11:33:56.905433 | orchestrator | Friday 19 September 2025 11:28:18 +0000 (0:00:02.103) 0:00:52.368 ****** 2025-09-19 11:33:56.905442 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.905452 | orchestrator | 2025-09-19 11:33:56.905461 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-19 11:33:56.905471 | orchestrator | Friday 19 September 2025 11:28:19 +0000 (0:00:01.144) 0:00:53.513 ****** 2025-09-19 11:33:56.905481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.905507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.905519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.905529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.905544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.905554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.905564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:33:56.905580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:33:56.905596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:33:56.905606 | orchestrator | 2025-09-19 11:33:56.905618 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-19 11:33:56.905629 | orchestrator | Friday 19 September 2025 11:28:23 +0000 (0:00:04.297) 0:00:57.810 ****** 2025-09-19 11:33:56.905640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 11:33:56.905652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:33:56.905668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:33:56.905679 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.905690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 11:33:56.905702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:33:56.905725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:33:56.905737 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.905748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 11:33:56.905760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:33:56.905772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:33:56.905783 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.905793 | orchestrator | 2025-09-19 11:33:56.905804 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-19 11:33:56.905815 | orchestrator | Friday 19 September 2025 11:28:24 +0000 (0:00:01.129) 0:00:58.939 ****** 2025-09-19 11:33:56.906167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 11:33:56.906214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 11:33:56.906236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:33:56.906248 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.906258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 11:33:56.906268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:33:56.906278 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2025-09-19 11:33:56.906293 | orchestrator | skipping: [testbed-node-2] => (item=haproxy)
2025-09-19 11:33:56.906303 | orchestrator | skipping: [testbed-node-2] => (item=proxysql)
2025-09-19 11:33:56.906320 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.906330 | orchestrator | skipping: [testbed-node-2] => (item=keepalived)
2025-09-19 11:33:56.906340 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.906350 | orchestrator |
2025-09-19 11:33:56.906360 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-09-19 11:33:56.906369 | orchestrator | Friday 19 September 2025 11:28:26 +0000 (0:00:02.240) 0:01:01.179 ******
2025-09-19 11:33:56.906385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-19 11:33:56.906396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 11:33:56.906406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 11:33:56.906416 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.906433 | orchestrator | skipping: [testbed-node-0] => (item=haproxy)
2025-09-19 11:33:56.906450 | orchestrator | skipping: [testbed-node-0] => (item=proxysql)
2025-09-19 11:33:56.906460 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2025-09-19 11:33:56.906470 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.906485 | orchestrator | skipping: [testbed-node-2] => (item=haproxy)
2025-09-19 11:33:56.906496 | orchestrator | skipping: [testbed-node-2] => (item=proxysql)
2025-09-19 11:33:56.906506 | orchestrator | skipping: [testbed-node-2] => (item=keepalived)
2025-09-19 11:33:56.906516 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.906525 | orchestrator |
2025-09-19 11:33:56.906535 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-09-19 11:33:56.906545 | orchestrator | Friday 19 September 2025 11:28:29 +0000 (0:00:02.780) 0:01:03.960 ******
2025-09-19 11:33:56.906560 | orchestrator | skipping: [testbed-node-1] => (item=haproxy)
2025-09-19 11:33:56.906577 | orchestrator | skipping: [testbed-node-1] => (item=proxysql)
2025-09-19 11:33:56.906587 | orchestrator | skipping: [testbed-node-1] => (item=keepalived)
2025-09-19 11:33:56.906597 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.906607 | orchestrator | skipping: [testbed-node-2] => (item=haproxy)
2025-09-19 11:33:56.906633 | orchestrator | skipping: [testbed-node-2] => (item=proxysql)
2025-09-19 11:33:56.906644 | orchestrator | skipping: [testbed-node-2] => (item=keepalived)
2025-09-19 11:33:56.906654 | orchestrator | skipping: [testbed-node-0] => (item=haproxy)
2025-09-19 11:33:56.906664 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.906682 | orchestrator | skipping: [testbed-node-0] => (item=proxysql)
2025-09-19 11:33:56.906692 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2025-09-19 11:33:56.906702 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.906711 | orchestrator |
2025-09-19 11:33:56.906721 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-09-19 11:33:56.906731 | orchestrator | Friday 19 September 2025 11:28:31 +0000 (0:00:01.611) 0:01:05.572 ******
2025-09-19 11:33:56.906740 | orchestrator | skipping: [testbed-node-0] => (item=haproxy)
2025-09-19 11:33:56.906757 | orchestrator | skipping: [testbed-node-0] => (item=proxysql)
2025-09-19 11:33:56.906768 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2025-09-19 11:33:56.906778 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.906787 | orchestrator | skipping: [testbed-node-1] => (item=haproxy)
2025-09-19 11:33:56.906806 | orchestrator | skipping: [testbed-node-1] => (item=proxysql)
2025-09-19 11:33:56.906817 | orchestrator | skipping: [testbed-node-1] => (item=keepalived)
2025-09-19 11:33:56.906827 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.906872 | orchestrator | skipping: [testbed-node-2] => (item=haproxy)
2025-09-19 11:33:56.906888 | orchestrator | skipping: [testbed-node-2] => (item=proxysql)
2025-09-19 11:33:56.906899 | orchestrator | skipping: [testbed-node-2] => (item=keepalived)
2025-09-19 11:33:56.906909 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.906918 | orchestrator |
2025-09-19 11:33:56.906957 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-09-19 11:33:56.906968 | orchestrator | Friday 19 September 2025 11:28:32 +0000 (0:00:01.078) 0:01:06.651 ******
2025-09-19 11:33:56.906978 | orchestrator | skipping: [testbed-node-0] => (item=haproxy)
2025-09-19 11:33:56.907000 | orchestrator | skipping: [testbed-node-0] => (item=proxysql)
2025-09-19 11:33:56.907010 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2025-09-19 11:33:56.907020 | orchestrator | skipping: [testbed-node-1] => (item=haproxy)
2025-09-19 11:33:56.907030 | orchestrator | skipping: [testbed-node-1] => (item=proxysql)
2025-09-19 11:33:56.907048 | orchestrator | skipping: [testbed-node-1] => (item=keepalived)
2025-09-19 11:33:56.907058 | orchestrator | skipping: [testbed-node-2] => (item=haproxy)
2025-09-19 11:33:56.907076 | orchestrator | skipping: [testbed-node-2] => (item=proxysql)
2025-09-19 11:33:56.907085 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.907095 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.907109 | orchestrator | skipping: [testbed-node-2] => (item=keepalived)
2025-09-19 11:33:56.907120 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.907129 | orchestrator |
2025-09-19 11:33:56.907139 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-09-19 11:33:56.907148 | orchestrator | Friday 19 September 2025 11:28:33 +0000 (0:00:01.006) 0:01:07.657 ******
2025-09-19 11:33:56.907158 | orchestrator | skipping: [testbed-node-0] => (item=haproxy)
2025-09-19 11:33:56.907168 | orchestrator | skipping: [testbed-node-0] => (item=proxysql)
2025-09-19 11:33:56.907236 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2025-09-19 11:33:56.907247 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.907257 | orchestrator | skipping: [testbed-node-1] => (item=haproxy)
2025-09-19 11:33:56.907275 | orchestrator | skipping: [testbed-node-1] => (item=proxysql)
2025-09-19 11:33:56.907285 | orchestrator | skipping: [testbed-node-1] => (item=keepalived)
2025-09-19 11:33:56.907299 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.907320 | orchestrator | skipping: [testbed-node-2] => (item=haproxy)
2025-09-19 11:33:56.907331 | orchestrator | skipping: [testbed-node-2] => (item=proxysql)
2025-09-19 11:33:56.907340 | orchestrator | skipping: [testbed-node-2] => (item=keepalived)
2025-09-19 11:33:56.907350 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.907360 | orchestrator |
2025-09-19 11:33:56.907369 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-09-19 11:33:56.907384 | orchestrator | Friday 19 September 2025 11:28:34 +0000 (0:00:00.818) 0:01:08.475 ******
2025-09-19 11:33:56.907394 | orchestrator | skipping: [testbed-node-1] => (item=haproxy)
2025-09-19 11:33:56.907412 | orchestrator | skipping: [testbed-node-1] => (item=proxysql)
2025-09-19 11:33:56.907422 | orchestrator | skipping: [testbed-node-1] => (item=keepalived)
2025-09-19 11:33:56.907432 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.907449 | orchestrator | skipping: [testbed-node-0] => (item=haproxy)
2025-09-19 11:33:56.907460 | orchestrator | skipping: [testbed-node-0] => (item=proxysql)
2025-09-19 11:33:56.907470 | orchestrator | skipping: [testbed-node-0] => (item=keepalived)
2025-09-19 11:33:56.907479 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.907495 | orchestrator | skipping: [testbed-node-2] => (item=haproxy)
2025-09-19 11:33:56.907512 | orchestrator | skipping: [testbed-node-2] => (item=proxysql)
2025-09-19 11:33:56.907522 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 11:33:56.907532 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.907541 | orchestrator | 2025-09-19 11:33:56.907551 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-19 11:33:56.907561 | orchestrator | Friday 19 September 2025 11:28:35 +0000 (0:00:01.172) 0:01:09.648 ****** 2025-09-19 11:33:56.907570 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 11:33:56.907580 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 11:33:56.907590 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 11:33:56.907599 | orchestrator | 2025-09-19 11:33:56.907613 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-19 11:33:56.907622 | orchestrator | Friday 19 September 2025 11:28:37 +0000 (0:00:01.654) 0:01:11.302 ****** 2025-09-19 11:33:56.907632 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-19 11:33:56.907642 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-19 11:33:56.907651 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-19 11:33:56.907661 | orchestrator | 2025-09-19 11:33:56.907670 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-19 11:33:56.907680 | orchestrator | Friday 19 September 2025 11:28:38 +0000 (0:00:01.517) 0:01:12.820 ****** 2025-09-19 11:33:56.907689 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 11:33:56.907699 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 11:33:56.907708 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 11:33:56.907718 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.907727 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 11:33:56.907737 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.907746 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 11:33:56.907755 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 11:33:56.907770 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.907780 | orchestrator | 2025-09-19 11:33:56.907790 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-19 11:33:56.907799 | orchestrator | Friday 19 September 2025 11:28:41 +0000 (0:00:02.863) 0:01:15.684 ****** 2025-09-19 11:33:56.907814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.907826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.907854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.907869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.907880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 11:33:56.907890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:33:56.907906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:33:56.907924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 11:33:56.907934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 11:33:56.907944 | orchestrator | 2025-09-19 11:33:56.907954 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-19 11:33:56.907964 | orchestrator | Friday 19 September 2025 11:28:45 +0000 (0:00:04.048) 0:01:19.732 ****** 2025-09-19 11:33:56.907974 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.907983 | orchestrator | 2025-09-19 
11:33:56.907993 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-19 11:33:56.908003 | orchestrator | Friday 19 September 2025 11:28:46 +0000 (0:00:00.801) 0:01:20.533 ****** 2025-09-19 11:33:56.908018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-19 11:33:56.908030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.908057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-19 11:33:56.908094 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.908104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-19 11:33:56.908144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.908161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 
11:33:56.908172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908181 | orchestrator | 2025-09-19 11:33:56.908191 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-19 11:33:56.908201 | orchestrator | Friday 19 September 2025 11:28:51 +0000 (0:00:04.716) 0:01:25.250 ****** 2025-09-19 11:33:56.908215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 11:33:56.908225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.908240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 
11:33:56.908267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 11:33:56.908287 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.908301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.908317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.908327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908365 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.908375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908384 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.908394 | orchestrator | 2025-09-19 11:33:56.908404 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-19 11:33:56.908413 | orchestrator | Friday 19 September 2025 11:28:52 +0000 (0:00:00.966) 0:01:26.216 ****** 2025-09-19 11:33:56.908423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': 
'8042'}})  2025-09-19 11:33:56.908434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 11:33:56.908443 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.908453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 11:33:56.908468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 11:33:56.908485 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.908495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 11:33:56.908505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 11:33:56.908515 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.908524 | orchestrator | 2025-09-19 11:33:56.908534 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-19 11:33:56.908543 | orchestrator | Friday 19 September 2025 11:28:52 +0000 (0:00:00.861) 0:01:27.078 ****** 2025-09-19 11:33:56.908553 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.908562 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.908572 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.908581 | orchestrator | 2025-09-19 11:33:56.908591 | orchestrator | TASK 
[proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-19 11:33:56.908601 | orchestrator | Friday 19 September 2025 11:28:54 +0000 (0:00:01.865) 0:01:28.943 ****** 2025-09-19 11:33:56.908610 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.908620 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.908629 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.908638 | orchestrator | 2025-09-19 11:33:56.908648 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-19 11:33:56.908657 | orchestrator | Friday 19 September 2025 11:28:56 +0000 (0:00:02.090) 0:01:31.034 ****** 2025-09-19 11:33:56.908670 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.908685 | orchestrator | 2025-09-19 11:33:56.908701 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-19 11:33:56.908717 | orchestrator | Friday 19 September 2025 11:28:58 +0000 (0:00:01.258) 0:01:32.292 ****** 2025-09-19 11:33:56.908742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.908759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.908803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.908894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.908935 | orchestrator | 2025-09-19 11:33:56.908951 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-19 11:33:56.908974 | orchestrator | Friday 19 September 2025 11:29:02 +0000 (0:00:04.080) 0:01:36.372 ****** 2025-09-19 11:33:56.908992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:33:56.909003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.909022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.909032 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.909043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:33:56.909059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.909069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.909079 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.909089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:33:56.909417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.909438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.909457 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.909467 | orchestrator | 2025-09-19 11:33:56.909477 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-19 11:33:56.909487 | orchestrator | Friday 19 September 2025 11:29:02 +0000 (0:00:00.772) 0:01:37.145 ****** 2025-09-19 11:33:56.909497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 11:33:56.909507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 11:33:56.909518 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.909527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 11:33:56.909537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 11:33:56.909547 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.909556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 11:33:56.909570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 11:33:56.909580 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.909590 | orchestrator | 2025-09-19 11:33:56.909600 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-19 11:33:56.909609 | orchestrator | Friday 19 September 2025 11:29:03 +0000 (0:00:00.762) 0:01:37.908 ****** 2025-09-19 11:33:56.909619 | orchestrator | changed: 
[testbed-node-0] 2025-09-19 11:33:56.909628 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.909637 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.909647 | orchestrator | 2025-09-19 11:33:56.909656 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-19 11:33:56.909674 | orchestrator | Friday 19 September 2025 11:29:04 +0000 (0:00:01.275) 0:01:39.183 ****** 2025-09-19 11:33:56.909690 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.909708 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.909725 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.909740 | orchestrator | 2025-09-19 11:33:56.909750 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-19 11:33:56.909759 | orchestrator | Friday 19 September 2025 11:29:06 +0000 (0:00:01.926) 0:01:41.110 ****** 2025-09-19 11:33:56.909768 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.909778 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.909787 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.909796 | orchestrator | 2025-09-19 11:33:56.909805 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-19 11:33:56.909815 | orchestrator | Friday 19 September 2025 11:29:07 +0000 (0:00:00.576) 0:01:41.687 ****** 2025-09-19 11:33:56.909824 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.909892 | orchestrator | 2025-09-19 11:33:56.909903 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-19 11:33:56.909926 | orchestrator | Friday 19 September 2025 11:29:08 +0000 (0:00:00.726) 0:01:42.413 ****** 2025-09-19 11:33:56.909955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 11:33:56.909974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 11:33:56.909992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 11:33:56.910010 | orchestrator | 2025-09-19 11:33:56.910080 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-19 11:33:56.910097 | orchestrator | Friday 19 September 2025 11:29:10 +0000 (0:00:02.689) 0:01:45.103 ****** 2025-09-19 11:33:56.910206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 11:33:56.910216 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.910243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 11:33:56.910262 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.910272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 11:33:56.910283 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.910292 | orchestrator | 2025-09-19 11:33:56.910302 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-19 11:33:56.910312 | orchestrator | Friday 19 September 2025 11:29:13 +0000 (0:00:02.189) 0:01:47.293 ****** 2025-09-19 11:33:56.910321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 11:33:56.910332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 11:33:56.910342 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.910355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 11:33:56.910364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 11:33:56.910373 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.910382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 11:33:56.910396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 11:33:56.910404 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.910413 | orchestrator | 2025-09-19 11:33:56.910422 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-19 11:33:56.910430 | orchestrator | Friday 19 September 2025 11:29:14 +0000 (0:00:01.711) 0:01:49.005 ****** 2025-09-19 11:33:56.910439 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.910447 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.910456 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.910464 | orchestrator | 2025-09-19 11:33:56.910477 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-19 11:33:56.910486 | orchestrator | Friday 19 September 2025 11:29:15 +0000 (0:00:00.422) 0:01:49.427 ****** 2025-09-19 11:33:56.910495 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.910503 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.910512 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.910520 | orchestrator | 2025-09-19 11:33:56.910529 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-19 
11:33:56.910538 | orchestrator | Friday 19 September 2025 11:29:16 +0000 (0:00:01.646) 0:01:51.073 ****** 2025-09-19 11:33:56.910546 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.910554 | orchestrator | 2025-09-19 11:33:56.910563 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-19 11:33:56.910572 | orchestrator | Friday 19 September 2025 11:29:17 +0000 (0:00:00.992) 0:01:52.066 ****** 2025-09-19 11:33:56.910581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.910591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.910649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.910700 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910727 | orchestrator | 2025-09-19 11:33:56.910736 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-19 11:33:56.910744 | orchestrator | Friday 19 September 2025 11:29:22 +0000 (0:00:04.641) 0:01:56.708 ****** 2025-09-19 11:33:56.910757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:33:56.910771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910803 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.910812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:33:56.910852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910880 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.910894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:33:56.910904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.910941 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.910950 | orchestrator | 2025-09-19 11:33:56.910958 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-19 11:33:56.910967 | orchestrator | Friday 19 September 2025 11:29:23 +0000 (0:00:01.009) 0:01:57.717 ****** 2025-09-19 11:33:56.910976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 11:33:56.910985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 11:33:56.910994 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.911003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 11:33:56.911016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 11:33:56.911025 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.911034 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 11:33:56.911042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 11:33:56.911051 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.911059 | orchestrator | 2025-09-19 11:33:56.911068 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-19 11:33:56.911077 | orchestrator | Friday 19 September 2025 11:29:25 +0000 (0:00:01.560) 0:01:59.278 ****** 2025-09-19 11:33:56.911085 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.911094 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.911102 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.911111 | orchestrator | 2025-09-19 11:33:56.911119 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-19 11:33:56.911133 | orchestrator | Friday 19 September 2025 11:29:26 +0000 (0:00:01.362) 0:02:00.640 ****** 2025-09-19 11:33:56.911142 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.911150 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.911159 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.911167 | orchestrator | 2025-09-19 11:33:56.911176 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-19 11:33:56.911184 | orchestrator | Friday 19 September 2025 11:29:28 +0000 (0:00:02.017) 0:02:02.657 ****** 2025-09-19 11:33:56.911193 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.911201 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.911209 | 
orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.911218 | orchestrator | 2025-09-19 11:33:56.911226 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-19 11:33:56.911234 | orchestrator | Friday 19 September 2025 11:29:28 +0000 (0:00:00.327) 0:02:02.985 ****** 2025-09-19 11:33:56.911243 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.911251 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.911259 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.911268 | orchestrator | 2025-09-19 11:33:56.911276 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-19 11:33:56.911285 | orchestrator | Friday 19 September 2025 11:29:29 +0000 (0:00:00.553) 0:02:03.538 ****** 2025-09-19 11:33:56.911293 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.911302 | orchestrator | 2025-09-19 11:33:56.911310 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-19 11:33:56.911319 | orchestrator | Friday 19 September 2025 11:29:30 +0000 (0:00:00.783) 0:02:04.321 ****** 2025-09-19 11:33:56.911332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:33:56.911341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:33:56.911562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:33:56.911637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:33:56.911652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:33:56.911724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:33:56.911734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911783 | orchestrator | 2025-09-19 11:33:56.911792 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-19 11:33:56.911800 | orchestrator | Friday 19 
September 2025 11:29:33 +0000 (0:00:03.829) 0:02:08.151 ****** 2025-09-19 11:33:56.911819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:33:56.911844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:33:56.911854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:33:56.911904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:33:56.911923 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911941 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.911954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.911991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:33:56.912000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.912009 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.912018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:33:56.912030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.912060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.912080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.912089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.912098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.912107 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.912116 | orchestrator | 2025-09-19 11:33:56.912124 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-19 11:33:56.912133 | orchestrator | Friday 19 September 2025 11:29:35 +0000 (0:00:01.402) 0:02:09.554 ****** 2025-09-19 11:33:56.912142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-19 11:33:56.912151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-19 11:33:56.912161 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.912169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-19 11:33:56.912178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-19 11:33:56.912187 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.912201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-19 11:33:56.912212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-19 11:33:56.912228 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.912238 | orchestrator | 2025-09-19 11:33:56.912248 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-19 11:33:56.912258 | orchestrator | Friday 19 September 2025 11:29:36 +0000 (0:00:01.013) 0:02:10.568 ****** 2025-09-19 11:33:56.912267 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.912277 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.912287 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.912296 | orchestrator | 2025-09-19 11:33:56.912306 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-19 11:33:56.912316 | orchestrator | Friday 19 September 2025 11:29:37 +0000 (0:00:01.381) 0:02:11.949 ****** 2025-09-19 11:33:56.912325 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.912335 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.912344 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.912353 | orchestrator | 2025-09-19 11:33:56.912363 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-19 11:33:56.912372 | orchestrator | Friday 19 September 2025 11:29:39 +0000 (0:00:01.904) 0:02:13.854 ****** 
2025-09-19 11:33:56.912382 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.912391 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.912401 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.912410 | orchestrator | 2025-09-19 11:33:56.912420 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-19 11:33:56.912430 | orchestrator | Friday 19 September 2025 11:29:40 +0000 (0:00:00.417) 0:02:14.271 ****** 2025-09-19 11:33:56.912439 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.912449 | orchestrator | 2025-09-19 11:33:56.912554 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-19 11:33:56.912563 | orchestrator | Friday 19 September 2025 11:29:40 +0000 (0:00:00.794) 0:02:15.066 ****** 2025-09-19 11:33:56.912582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:33:56.912599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 11:33:56.912873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:33:56.912897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:33:56.912921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 11:33:56.912940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}}) 
 2025-09-19 11:33:56.912955 | orchestrator | 2025-09-19 11:33:56.912964 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-19 11:33:56.912973 | orchestrator | Friday 19 September 2025 11:29:44 +0000 (0:00:04.001) 0:02:19.068 ****** 2025-09-19 11:33:56.912988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:33:56.912999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}}) 
 2025-09-19 11:33:56.913014 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.913028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:33:56.913045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 11:33:56.913062 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.913075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:33:56.913091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-19 11:33:56.913106 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.913115 | orchestrator | 2025-09-19 11:33:56.913124 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-19 11:33:56.913133 | orchestrator | Friday 19 September 2025 11:29:47 +0000 (0:00:02.970) 0:02:22.039 ****** 2025-09-19 11:33:56.913142 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 11:33:56.913168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 11:33:56.913178 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.913187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 11:33:56.913196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 11:33:56.913205 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.913218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 11:33:56.913227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-19 11:33:56.913236 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.913245 | orchestrator | 2025-09-19 11:33:56.913254 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-19 11:33:56.913262 | orchestrator | Friday 19 September 2025 11:29:50 +0000 (0:00:03.063) 0:02:25.102 ****** 2025-09-19 11:33:56.913271 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.913285 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.913293 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.913302 | orchestrator | 
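The haproxy-config items skipped above each carry a `custom_member_list` of raw HAProxy `server` lines (`check inter 2000 rise 2 fall 5` sets the health-check interval in milliseconds and the up/down thresholds), terminated by an empty string. As a rough illustration (not kolla-ansible's actual template), such a list could be rendered into a backend stanza like this:

```python
# Illustrative sketch only: render a kolla-style custom_member_list (as seen
# in the log, including its trailing '' entry) into an HAProxy backend stanza.
def render_backend(name, mode, extras, members):
    lines = [f"backend {name}_back", f"    mode {mode}"]
    lines += [f"    {e}" for e in extras]        # e.g. 'timeout server 6h'
    lines += [f"    {m}" for m in members if m]  # skip the trailing '' entry
    return "\n".join(lines)

members = [
    "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
    "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
    "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
    "",  # kolla's lists end with an empty string, as shown in the log
]
cfg = render_backend("glance_api", "http", ["timeout server 6h"], members)
print(cfg)
```

The TLS-proxy variants differ only in appending `ssl verify required ca-file ca-certificates.crt` to each member line and setting `tls_backend: 'yes'`.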
2025-09-19 11:33:56.913378 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-19 11:33:56.913388 | orchestrator | Friday 19 September 2025 11:29:52 +0000 (0:00:01.218) 0:02:26.321 ****** 2025-09-19 11:33:56.913397 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.913405 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.913414 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.913422 | orchestrator | 2025-09-19 11:33:56.913431 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-19 11:33:56.913440 | orchestrator | Friday 19 September 2025 11:29:54 +0000 (0:00:01.953) 0:02:28.274 ****** 2025-09-19 11:33:56.913449 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.913458 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.913468 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.913477 | orchestrator | 2025-09-19 11:33:56.913487 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-19 11:33:56.913496 | orchestrator | Friday 19 September 2025 11:29:54 +0000 (0:00:00.484) 0:02:28.758 ****** 2025-09-19 11:33:56.913506 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.913515 | orchestrator | 2025-09-19 11:33:56.913525 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-19 11:33:56.913535 | orchestrator | Friday 19 September 2025 11:29:55 +0000 (0:00:00.852) 0:02:29.611 ****** 2025-09-19 11:33:56.913550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:33:56.913562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:33:56.913573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:33:56.913584 | orchestrator | 2025-09-19 11:33:56.913598 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-19 
11:33:56.913700 | orchestrator | Friday 19 September 2025 11:29:58 +0000 (0:00:02.949) 0:02:32.561 ****** 2025-09-19 11:33:56.913711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 11:33:56.913740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 11:33:56.913750 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.913760 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.913770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 11:33:56.913780 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.913790 | orchestrator | 2025-09-19 11:33:56.913805 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-19 11:33:56.913815 | orchestrator | Friday 19 September 2025 11:29:58 +0000 (0:00:00.550) 0:02:33.111 ****** 2025-09-19 11:33:56.913823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-19 11:33:56.913878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-19 11:33:56.913888 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.913897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-19 11:33:56.913905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-19 11:33:56.913914 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.913923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-19 11:33:56.913931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-19 11:33:56.913946 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.913954 | orchestrator | 2025-09-19 11:33:56.913963 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-19 11:33:56.913972 | orchestrator | Friday 19 September 2025 11:29:59 +0000 (0:00:00.620) 0:02:33.732 ****** 2025-09-19 11:33:56.913980 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.913989 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.913997 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.914006 | orchestrator | 2025-09-19 11:33:56.915165 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-19 11:33:56.915194 | orchestrator | Friday 19 September 2025 11:30:00 +0000 (0:00:01.331) 0:02:35.063 ****** 2025-09-19 11:33:56.915203 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.915211 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.915219 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.915227 | orchestrator | 2025-09-19 11:33:56.915235 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-19 11:33:56.915243 | orchestrator | Friday 19 September 2025 11:30:02 +0000 (0:00:01.967) 0:02:37.031 ****** 2025-09-19 11:33:56.915251 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.915259 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.915267 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.915275 | orchestrator | 2025-09-19 11:33:56.915283 | orchestrator | TASK [include_role : 
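Each container definition above includes a `healthcheck` dict with second values stored as strings (`interval`, `timeout`, `start_period`) plus a `CMD-SHELL` test. A hedged sketch of how such a dict maps onto the Docker Engine API's HealthConfig fields, which take nanoseconds — the field names follow docker-py, and the conversion is an assumption, not kolla-ansible's actual code:

```python
# Assumed conversion: kolla-style healthcheck dict (string seconds) to the
# nanosecond fields the Docker Engine API's HealthConfig expects.
NS_PER_SECOND = 1_000_000_000

def to_docker_healthcheck(hc):
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl http://...']
        "interval": int(hc["interval"]) * NS_PER_SECOND,
        "timeout": int(hc["timeout"]) * NS_PER_SECOND,
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]) * NS_PER_SECOND,
    }

# The grafana/glance healthcheck shape seen in the log above:
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"],
      "timeout": "30"}
print(to_docker_healthcheck(hc)["interval"])  # 30000000000
```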
horizon] ************************************************** 2025-09-19 11:33:56.915290 | orchestrator | Friday 19 September 2025 11:30:03 +0000 (0:00:00.530) 0:02:37.561 ****** 2025-09-19 11:33:56.915320 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.915328 | orchestrator | 2025-09-19 11:33:56.915336 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-19 11:33:56.915344 | orchestrator | Friday 19 September 2025 11:30:04 +0000 (0:00:00.853) 0:02:38.414 ****** 2025-09-19 11:33:56.915369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:33:56.915397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:33:56.915413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:33:56.915427 | orchestrator | 2025-09-19 11:33:56.915467 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-19 11:33:56.915475 | orchestrator | Friday 19 September 2025 11:30:07 +0000 (0:00:03.761) 0:02:42.176 ****** 2025-09-19 11:33:56.915516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
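The horizon frontends above all carry `use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }`, diverting ACME HTTP-01 challenge requests away from the dashboard. A small sketch of the equivalent path match in Python (the HAProxy config leaves the dot before `well-known` unescaped; the anchored-regex behaviour shown here is otherwise the same):

```python
import re

# Same match the horizon frontend rules express with HAProxy's path_reg:
# anchored at the path start, any non-empty challenge token after the prefix.
ACME_RE = re.compile(r"^/\.well-known/acme-challenge/.+")

def routes_to_acme_backend(path: str) -> bool:
    """True when a request path would be sent to acme_client_back."""
    return bool(ACME_RE.match(path))

print(routes_to_acme_backend("/.well-known/acme-challenge/token123"))  # True
print(routes_to_acme_backend("/horizon/auth/login/"))                  # False
```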
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:33:56.915526 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.915539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})
2025-09-19 11:33:56.915553 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.915567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 11:33:56.915576 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.915584 | orchestrator |
2025-09-19 11:33:56.915592 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-09-19 11:33:56.915600 | orchestrator | Friday 19 September 2025 11:30:09 +0000 (0:00:01.080) 0:02:43.256 ******
2025-09-19 11:33:56.915611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-19 11:33:56.915621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-19 11:33:56.915636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-19 11:33:56.915644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-19 11:33:56.915653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-19 11:33:56.915661 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.915670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-19 11:33:56.915681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-19 11:33:56.915690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-19 11:33:56.915698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-19 11:33:56.915706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-19 11:33:56.915714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-19 11:33:56.915722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-19 11:33:56.915730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-09-19 11:33:56.915738 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.915754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-09-19 11:33:56.915762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-09-19 11:33:56.915770 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.915777 | orchestrator |
2025-09-19 11:33:56.915785 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-09-19 11:33:56.915793 | orchestrator | Friday 19 September 2025 11:30:10 +0000 (0:00:01.164) 0:02:44.421 ******
2025-09-19 11:33:56.915801 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.915809 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.915816 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.915824 | orchestrator |
2025-09-19 11:33:56.915966 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-09-19 11:33:56.915977 | orchestrator | Friday 19 September 2025 11:30:11 +0000 (0:00:01.322) 0:02:45.743 ******
2025-09-19 11:33:56.915985 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.915993 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.916001 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.916008 | orchestrator |
2025-09-19 11:33:56.916016 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-09-19 11:33:56.916024 | orchestrator | Friday 19 September 2025 11:30:13 +0000 (0:00:02.023) 0:02:47.766 ******
2025-09-19 11:33:56.916032 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.916039 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.916047 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.916055 | orchestrator |
2025-09-19 11:33:56.916063 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-09-19 11:33:56.916070 | orchestrator | Friday 19 September 2025 11:30:14 +0000 (0:00:00.503) 0:02:48.270 ******
2025-09-19 11:33:56.916078 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.916086 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.916093 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.916101 | orchestrator |
2025-09-19 11:33:56.916108 | orchestrator | TASK [include_role : keystone] *************************************************
2025-09-19 11:33:56.916116 | orchestrator | Friday 19 September 2025 11:30:14 +0000 (0:00:00.283) 0:02:48.553 ******
2025-09-19 11:33:56.916124 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:33:56.916132 | orchestrator |
2025-09-19 11:33:56.916140 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-09-19 11:33:56.916151 | orchestrator | Friday 19 September 2025 11:30:15 +0000 (0:00:00.910) 0:02:49.463 ******
2025-09-19 11:33:56.916159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:33:56.916173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:33:56.916181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:33:56.916192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:33:56.916200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:33:56.916211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:33:56.916218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:33:56.916231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:33:56.916241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:33:56.916248 | orchestrator |
2025-09-19 11:33:56.916255 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-09-19 11:33:56.916262 | orchestrator | Friday 19 September 2025 11:30:18 +0000 (0:00:03.344) 0:02:52.808 ******
2025-09-19 11:33:56.916269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:33:56.916280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:33:56.916287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:33:56.916307 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.916314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:33:56.916324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:33:56.916331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:33:56.916338 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.916349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:33:56.916357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-19 11:33:56.916368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:33:56.916374 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.916381 | orchestrator |
2025-09-19 11:33:56.916388 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-09-19 11:33:56.916394 | orchestrator | Friday 19 September 2025 11:30:19 +0000 (0:00:00.598) 0:02:53.407 ******
2025-09-19 11:33:56.916402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 11:33:56.916412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 11:33:56.916419 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.916426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 11:33:56.916433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 11:33:56.916439 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.916446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 11:33:56.916453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-19 11:33:56.916459 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.916466 | orchestrator |
2025-09-19 11:33:56.916472 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-09-19 11:33:56.916479 | orchestrator | Friday 19 September 2025 11:30:19 +0000 (0:00:00.790) 0:02:54.197 ******
2025-09-19 11:33:56.916485 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.916492 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.916498 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.916505 | orchestrator |
2025-09-19 11:33:56.916511 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-09-19 11:33:56.916525 | orchestrator | Friday 19 September 2025 11:30:21 +0000 (0:00:01.496) 0:02:55.694 ******
2025-09-19 11:33:56.916532 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.916538 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.916545 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.916552 | orchestrator |
2025-09-19 11:33:56.916558 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-09-19 11:33:56.916565 | orchestrator | Friday 19 September 2025 11:30:23 +0000 (0:00:00.288) 0:02:57.787 ******
2025-09-19 11:33:56.916571 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.916578 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.916584 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.916591 | orchestrator |
2025-09-19 11:33:56.916597 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-09-19 11:33:56.916616 | orchestrator | Friday 19 September 2025 11:30:23 +0000 (0:00:00.920) 0:02:58.076 ******
2025-09-19 11:33:56.916623 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:33:56.916630 | orchestrator |
2025-09-19 11:33:56.916636 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-09-19 11:33:56.916643 | orchestrator | Friday 19 September 2025 11:30:24 +0000 (0:00:00.920) 0:02:58.996 ******
2025-09-19 11:33:56.916650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 11:33:56.916662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.916669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 11:33:56.916684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.916692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 11:33:56.916699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.916706 | orchestrator |
2025-09-19 11:33:56.916713 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-09-19 11:33:56.916719 | orchestrator | Friday 19 September 2025 11:30:28 +0000 (0:00:03.361) 0:03:02.358 ******
2025-09-19 11:33:56.916737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:33:56.916745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.916756 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.916768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:33:56.916776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.916782 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.916790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:33:56.916799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.916806 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.916859 | orchestrator | 2025-09-19 11:33:56.916867 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-19 11:33:56.916874 | orchestrator | Friday 19 September 2025 11:30:28 +0000 (0:00:00.570) 0:03:02.929 ****** 2025-09-19 11:33:56.916881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-19 11:33:56.916888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-19 11:33:56.916895 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.916901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9511', 'listen_port': '9511'}})  2025-09-19 11:33:56.916908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-19 11:33:56.916915 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.916924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-19 11:33:56.916931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-19 11:33:56.916938 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.916945 | orchestrator | 2025-09-19 11:33:56.916951 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-19 11:33:56.916958 | orchestrator | Friday 19 September 2025 11:30:29 +0000 (0:00:00.815) 0:03:03.744 ****** 2025-09-19 11:33:56.916964 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.916971 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.916978 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.916984 | orchestrator | 2025-09-19 11:33:56.916991 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-19 11:33:56.916997 | orchestrator | Friday 19 September 2025 11:30:30 +0000 (0:00:01.418) 0:03:05.163 ****** 2025-09-19 11:33:56.917004 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.917010 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.917017 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.917023 | orchestrator | 2025-09-19 11:33:56.917030 | orchestrator | TASK 
[include_role : manila] *************************************************** 2025-09-19 11:33:56.917036 | orchestrator | Friday 19 September 2025 11:30:33 +0000 (0:00:02.168) 0:03:07.331 ****** 2025-09-19 11:33:56.917043 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.917049 | orchestrator | 2025-09-19 11:33:56.917056 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-19 11:33:56.917063 | orchestrator | Friday 19 September 2025 11:30:34 +0000 (0:00:01.059) 0:03:08.390 ****** 2025-09-19 11:33:56.917070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-19 11:33:56.917084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-19 11:33:56.917092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-19 11:33:56.917156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917163 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917177 | orchestrator | 2025-09-19 11:33:56.917184 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-19 11:33:56.917190 | orchestrator | Friday 19 September 2025 11:30:37 +0000 (0:00:03.381) 0:03:11.772 ****** 2025-09-19 11:33:56.917197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 11:33:56.917213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917238 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.917244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 11:33:56.917251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917279 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.917286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 11:33:56.917609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.917643 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.917650 | orchestrator | 2025-09-19 11:33:56.917657 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-19 11:33:56.917664 | orchestrator | Friday 19 September 2025 11:30:38 +0000 (0:00:00.833) 0:03:12.606 ****** 2025-09-19 11:33:56.917671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-19 11:33:56.917678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-19 11:33:56.917684 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.917695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-19 11:33:56.917702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-19 11:33:56.917709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-19 
11:33:56.917716 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.917722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-09-19 11:33:56.917729 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.917736 | orchestrator |
2025-09-19 11:33:56.917742 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-09-19 11:33:56.917749 | orchestrator | Friday 19 September 2025 11:30:39 +0000 (0:00:00.879) 0:03:13.485 ******
2025-09-19 11:33:56.917756 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.917762 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.917769 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.917775 | orchestrator |
2025-09-19 11:33:56.917782 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-09-19 11:33:56.917789 | orchestrator | Friday 19 September 2025 11:30:40 +0000 (0:00:01.438) 0:03:14.923 ******
2025-09-19 11:33:56.917795 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.917802 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.917808 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.917815 | orchestrator |
2025-09-19 11:33:56.917821 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-09-19 11:33:56.917828 | orchestrator | Friday 19 September 2025 11:30:42 +0000 (0:00:02.003) 0:03:16.927 ******
2025-09-19 11:33:56.917851 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:33:56.917857 | orchestrator |
2025-09-19 11:33:56.917864 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-09-19 11:33:56.917888 | orchestrator | Friday 19 September 2025 11:30:44 +0000 (0:00:01.338) 0:03:18.265 ******
2025-09-19 11:33:56.917896 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 11:33:56.917902 | orchestrator |
2025-09-19 11:33:56.917909 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-09-19 11:33:56.917916 | orchestrator | Friday 19 September 2025 11:30:46 +0000 (0:00:02.930) 0:03:21.196 ******
2025-09-19 11:33:56.917928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-19 11:33:56.917939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-09-19 11:33:56.917946 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.917969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-19 11:33:56.917983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-09-19 11:33:56.917990 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.918000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-19 11:33:56.918008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-09-19 11:33:56.918051 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.918061 | orchestrator |
2025-09-19 11:33:56.918068 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-09-19 11:33:56.918074 | orchestrator | Friday 19 September 2025 11:30:49 +0000 (0:00:02.034) 0:03:23.230 ******
2025-09-19 11:33:56.918100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306',
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-19 11:33:56.918114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-09-19 11:33:56.918121 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.918131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-19 11:33:56.918158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-09-19 11:33:56.918166 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.918177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-19 11:33:56.918184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes':
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-09-19 11:33:56.918191 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.918198 | orchestrator |
2025-09-19 11:33:56.918205 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-09-19 11:33:56.918211 | orchestrator | Friday 19 September 2025 11:30:51 +0000 (0:00:02.413) 0:03:25.643 ******
2025-09-19 11:33:56.918219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-19 11:33:56.918249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-19 11:33:56.918258 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.918266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-19 11:33:56.918274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-19 11:33:56.918281 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.918289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-19 11:33:56.918297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-19 11:33:56.918304 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.918312 | orchestrator |
2025-09-19 11:33:56.918320 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-09-19 11:33:56.918327 | orchestrator | Friday 19 September 2025 11:30:53 +0000 (0:00:02.473) 0:03:28.117 ******
2025-09-19 11:33:56.918335 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.918342 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.918350 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.918365 | orchestrator |
2025-09-19 11:33:56.918373 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-09-19 11:33:56.918381 | orchestrator | Friday 19 September 2025 11:30:56 +0000 (0:00:02.218) 0:03:30.335 ******
2025-09-19 11:33:56.918388 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.918396 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.918403 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.918411 | orchestrator |
2025-09-19 11:33:56.918418 | orchestrator | TASK [include_role : masakari] *************************************************
2025-09-19 11:33:56.918426 | orchestrator | Friday 19 September 2025 11:30:57 +0000 (0:00:01.632) 0:03:31.968 ******
2025-09-19 11:33:56.918433
| orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.918441 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.918448 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.918455 | orchestrator |
2025-09-19 11:33:56.918477 | orchestrator | TASK [include_role : memcached] ************************************************
2025-09-19 11:33:56.918485 | orchestrator | Friday 19 September 2025 11:30:58 +0000 (0:00:00.648) 0:03:32.617 ******
2025-09-19 11:33:56.918493 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:33:56.918534 | orchestrator |
2025-09-19 11:33:56.918542 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-09-19 11:33:56.918608 | orchestrator | Friday 19 September 2025 11:30:59 +0000 (0:00:01.125) 0:03:33.743 ******
2025-09-19 11:33:56.918618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 11:33:56.918626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 11:33:56.918633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 11:33:56.918640 | orchestrator |
2025-09-19 11:33:56.918651 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-09-19 11:33:56.918658 | orchestrator | Friday 19 September 2025 11:31:01 +0000 (0:00:01.526) 0:03:35.270 ******
2025-09-19 11:33:56.918670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 11:33:56.918677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 11:33:56.918684 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.918751 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.918777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka',
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-19 11:33:56.918785 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.918792 | orchestrator |
2025-09-19 11:33:56.918799 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-09-19 11:33:56.918806 | orchestrator | Friday 19 September 2025 11:31:01 +0000 (0:00:00.827) 0:03:36.097 ******
2025-09-19 11:33:56.918813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-19 11:33:56.919403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-19 11:33:56.919429 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.919435 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.919441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-19 11:33:56.919447 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.919453 | orchestrator |
2025-09-19 11:33:56.919459 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-09-19 11:33:56.919472 | orchestrator | Friday 19 September 2025 11:31:02 +0000 (0:00:00.669) 0:03:36.766 ******
2025-09-19 11:33:56.919478 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.919483 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.919557 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.919563 | orchestrator |
2025-09-19 11:33:56.919569 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-09-19 11:33:56.919575 | orchestrator | Friday 19 September 2025 11:31:03 +0000 (0:00:00.451) 0:03:37.218 ******
2025-09-19 11:33:56.919585 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.919716 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.919724 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.919730 | orchestrator |
2025-09-19 11:33:56.919736 | orchestrator | TASK [include_role : mistral] **************************************************
2025-09-19 11:33:56.919741 | orchestrator | Friday 19 September 2025 11:31:04 +0000 (0:00:01.623) 0:03:38.842 ******
2025-09-19 11:33:56.919747 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.919753 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.919758 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.919764 | orchestrator |
2025-09-19 11:33:56.919770 | orchestrator | TASK [include_role : neutron] **************************************************
2025-09-19 11:33:56.919775 | orchestrator | Friday 19 September 2025 11:31:05 +0000 (0:00:00.723) 0:03:39.566 ******
2025-09-19 11:33:56.919781 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:33:56.919786 | orchestrator |
2025-09-19 11:33:56.919792 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-09-19 11:33:56.919798 | orchestrator | Friday 19 September 2025 11:31:06 +0000 (0:00:01.310) 0:03:40.876 ******
2025-09-19 11:33:56.919804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:33:56.919875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.919885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.919899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.919908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-19 11:33:56.919914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.919934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 11:33:56.919941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 11:33:56.919948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True,
'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.919958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:33:56.919966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.919972 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 11:33:56.919978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:33:56.919997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 11:33:56.920014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:33:56.920022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:33:56.920048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 11:33:56.920079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:33:56.920104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920128 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:33:56.920168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 11:33:56.920174 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 
'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 11:33:56.920254 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:33:56.920273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:33:56.920284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 11:33:56.920339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:33:56.920345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920351 | orchestrator | 2025-09-19 11:33:56.920357 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-19 11:33:56.920363 | orchestrator | Friday 19 September 2025 11:31:13 +0000 (0:00:06.422) 0:03:47.299 ****** 2025-09-19 11:33:56.920372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:33:56.920378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 11:33:56.920422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:33:56.920446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 11:33:56.920516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:33:56.920530 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:33:56.920622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 11:33:56.920662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 11:33:56.920670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 
11:33:56.920677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:33:56.920735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 11:33:56.920766 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.920785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:33:56.920792 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:33:56.920798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920804 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.920813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 11:33:56.920874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.920880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 11:33:56.920889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 11:33:56.920898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.920917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:33:56.920924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.920930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-19 11:33:56.920936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-19 11:33:56.920945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.920954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-19 11:33:56.920963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:33:56.920970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.920976 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.920982 | orchestrator |
2025-09-19 11:33:56.920987 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-09-19 11:33:56.920993 | orchestrator | Friday 19 September 2025 11:31:14 +0000 (0:00:01.702) 0:03:49.001 ******
2025-09-19 11:33:56.920999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-09-19 11:33:56.921026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-09-19 11:33:56.921033 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.921039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-09-19 11:33:56.921045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-09-19 11:33:56.921051 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.921060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-09-19 11:33:56.921069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-09-19 11:33:56.921075 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.921081 | orchestrator |
2025-09-19 11:33:56.921086 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-09-19 11:33:56.921092 | orchestrator | Friday 19 September 2025 11:31:16 +0000 (0:00:01.736) 0:03:50.738 ******
2025-09-19 11:33:56.921271 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.921278 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.921284 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.921290 | orchestrator |
2025-09-19 11:33:56.921295 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-09-19 11:33:56.921301 | orchestrator | Friday 19 September 2025 11:31:18 +0000 (0:00:02.155) 0:03:52.893 ******
2025-09-19 11:33:56.921307 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.921312 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.921318 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.921324 | orchestrator |
2025-09-19 11:33:56.921330 | orchestrator | TASK [include_role : placement] ************************************************
2025-09-19 11:33:56.921335 | orchestrator | Friday 19 September 2025 11:31:20 +0000 (0:00:02.186) 0:03:55.080 ******
2025-09-19 11:33:56.921341 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:33:56.921347 | orchestrator |
2025-09-19 11:33:56.921352 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-09-19 11:33:56.921358 | orchestrator | Friday 19 September 2025 11:31:22 +0000 (0:00:01.297) 0:03:56.378 ******
2025-09-19 11:33:56.921377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.921385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.921391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.921403 | orchestrator |
2025-09-19 11:33:56.921409 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-09-19 11:33:56.921414 | orchestrator | Friday 19 September 2025 11:31:25 +0000 (0:00:03.587) 0:03:59.965 ******
2025-09-19 11:33:56.921423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.921429 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.921448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.921454 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.921460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.921466 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.921476 | orchestrator |
2025-09-19 11:33:56.921482 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-09-19 11:33:56.921488 | orchestrator | Friday 19 September 2025 11:31:26 +0000 (0:00:00.970) 0:04:00.936 ******
2025-09-19 11:33:56.921494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-19 11:33:56.921500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-19 11:33:56.921506 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.921512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-19 11:33:56.921518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-19 11:33:56.921524 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.921529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-19 11:33:56.921538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-19 11:33:56.921544 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.921549 | orchestrator |
2025-09-19 11:33:56.921555 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-09-19 11:33:56.921561 | orchestrator | Friday 19 September 2025 11:31:27 +0000 (0:00:00.798) 0:04:01.735 ******
2025-09-19 11:33:56.921566 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.921572 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.921578 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.921583 | orchestrator |
2025-09-19 11:33:56.921589 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-09-19 11:33:56.921595 | orchestrator | Friday 19 September 2025 11:31:28 +0000 (0:00:01.259) 0:04:02.994 ******
2025-09-19 11:33:56.921600 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.921606 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.921612 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.921617 | orchestrator |
2025-09-19 11:33:56.921623 | orchestrator | TASK [include_role : nova] *****************************************************
2025-09-19 11:33:56.921629 | orchestrator | Friday 19 September 2025 11:31:31 +0000 (0:00:02.237) 0:04:05.231 ******
2025-09-19 11:33:56.921635 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:33:56.921640 | orchestrator |
2025-09-19 11:33:56.921646 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-09-19 11:33:56.921652 | orchestrator | Friday 19 September 2025 11:31:32 +0000 (0:00:01.603) 0:04:06.835 ******
2025-09-19 11:33:56.921671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.921683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.921690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.921701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.921708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.921726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.921775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.921783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.921792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.921798 | orchestrator |
2025-09-19 11:33:56.921804 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-09-19 11:33:56.921810 | orchestrator | Friday 19 September 2025 11:31:37 +0000 (0:00:04.385) 0:04:11.220 ******
2025-09-19 11:33:56.921845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.921858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.921864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.921870 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.921879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.921885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.921891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.921900 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.921921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.921929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.921936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:33:56.921942 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.921949 | orchestrator |
2025-09-19 11:33:56.921958 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-09-19 11:33:56.922249 | orchestrator | Friday 19 September 2025 11:31:37 +0000 (0:00:00.678) 0:04:11.899 ******
2025-09-19 11:33:56.922260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 11:33:56.922268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 11:33:56.922275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 11:33:56.922282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 11:33:56.922293 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.922299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 11:33:56.922305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 11:33:56.922311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 11:33:56.922337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 11:33:56.922344 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.922350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 11:33:56.922356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 11:33:56.922361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 11:33:56.922367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 11:33:56.922373 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.922379 | orchestrator |
2025-09-19 11:33:56.922384 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-09-19 11:33:56.922390 | orchestrator | Friday 19 September 2025 11:31:39 +0000 (0:00:01.384) 0:04:13.284 ******
2025-09-19 11:33:56.922396 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.922401 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.922407 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.922412 | orchestrator |
2025-09-19 11:33:56.922418 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-09-19 11:33:56.922424 | orchestrator | Friday 19 September 2025 11:31:40 +0000 (0:00:01.486) 0:04:14.770 ******
2025-09-19 11:33:56.922429 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.922435 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.922440 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.922446 | orchestrator |
2025-09-19 11:33:56.922452 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-09-19 11:33:56.922457 | orchestrator | Friday 19 September 2025 11:31:42 +0000 (0:00:02.329) 0:04:17.100 ******
2025-09-19 11:33:56.922463 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:33:56.922469 | orchestrator |
2025-09-19 11:33:56.922475 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-09-19 11:33:56.922480 | orchestrator | Friday 19 September 2025 11:31:44 +0000 (0:00:01.710) 0:04:18.811 ****** 2025-09-19 11:33:56.922486 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-19 11:33:56.922492 | orchestrator | 2025-09-19 11:33:56.922497 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-19 11:33:56.922507 | orchestrator | Friday 19 September 2025 11:31:45 +0000 (0:00:00.895) 0:04:19.706 ****** 2025-09-19 11:33:56.922514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-19 11:33:56.922520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-19 11:33:56.922526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-19 11:33:56.922532 | orchestrator | 2025-09-19 11:33:56.922538 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-19 11:33:56.922593 | orchestrator | Friday 19 September 2025 11:31:49 +0000 (0:00:04.329) 0:04:24.036 ****** 2025-09-19 11:33:56.922602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:33:56.922608 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.922628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:33:56.922634 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.922640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:33:56.922646 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.922760 | orchestrator | 2025-09-19 11:33:56.922769 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-19 11:33:56.922775 | orchestrator | Friday 19 September 2025 11:31:51 +0000 (0:00:01.483) 0:04:25.519 ****** 2025-09-19 11:33:56.922781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 11:33:56.922792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 11:33:56.922798 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.922807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 11:33:56.922813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 11:33:56.922819 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.922824 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 11:33:56.922878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-19 11:33:56.922885 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.922891 | orchestrator | 2025-09-19 11:33:56.922897 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 11:33:56.922903 | orchestrator | Friday 19 September 2025 11:31:52 +0000 (0:00:01.605) 0:04:27.125 ****** 2025-09-19 11:33:56.922908 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.922914 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.922920 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.922925 | orchestrator | 2025-09-19 11:33:56.922931 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-19 11:33:56.922936 | orchestrator | Friday 19 September 2025 11:31:55 +0000 (0:00:02.374) 0:04:29.500 ****** 2025-09-19 11:33:56.922942 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.922948 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.922953 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.922959 | orchestrator | 2025-09-19 11:33:56.922965 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-19 11:33:56.922992 | orchestrator | Friday 19 September 2025 11:31:57 +0000 (0:00:02.685) 0:04:32.185 ****** 2025-09-19 11:33:56.922999 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-19 11:33:56.923005 | orchestrator | 2025-09-19 11:33:56.923011 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-19 11:33:56.923016 | orchestrator | Friday 19 September 2025 11:31:59 +0000 (0:00:01.173) 0:04:33.359 ****** 2025-09-19 11:33:56.923022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:33:56.923029 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.923034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:33:56.923048 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.923053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:33:56.923059 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.923065 | orchestrator | 2025-09-19 11:33:56.923071 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-19 11:33:56.923076 | orchestrator | Friday 19 September 2025 11:32:00 +0000 (0:00:01.138) 0:04:34.497 ****** 2025-09-19 11:33:56.923085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:33:56.923091 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.923097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:33:56.923103 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.923108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 
'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-19 11:33:56.923114 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.923120 | orchestrator | 2025-09-19 11:33:56.923126 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-19 11:33:56.923147 | orchestrator | Friday 19 September 2025 11:32:01 +0000 (0:00:01.107) 0:04:35.604 ****** 2025-09-19 11:33:56.923154 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.923160 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.923166 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.923171 | orchestrator | 2025-09-19 11:33:56.923177 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 11:33:56.923183 | orchestrator | Friday 19 September 2025 11:32:02 +0000 (0:00:01.583) 0:04:37.187 ****** 2025-09-19 11:33:56.923193 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:56.923198 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:56.923204 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:56.923210 | orchestrator | 2025-09-19 11:33:56.923215 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-19 11:33:56.923221 | orchestrator | Friday 19 September 2025 11:32:05 +0000 (0:00:02.318) 0:04:39.506 ****** 2025-09-19 11:33:56.923227 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:56.923233 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:56.923239 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:56.923244 | 
orchestrator | 2025-09-19 11:33:56.923250 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-19 11:33:56.923256 | orchestrator | Friday 19 September 2025 11:32:08 +0000 (0:00:02.943) 0:04:42.449 ****** 2025-09-19 11:33:56.923262 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-19 11:33:56.923267 | orchestrator | 2025-09-19 11:33:56.923273 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-19 11:33:56.923279 | orchestrator | Friday 19 September 2025 11:32:09 +0000 (0:00:00.902) 0:04:43.352 ****** 2025-09-19 11:33:56.923285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 11:33:56.923291 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.923297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 11:33:56.923303 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 11:33:56.923314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 11:33:56.923320 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.923326 | orchestrator | 2025-09-19 11:33:56.923332 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-19 11:33:56.923337 | orchestrator | Friday 19 September 2025 11:32:10 +0000 (0:00:01.429) 0:04:44.781 ****** 2025-09-19 11:33:56.923343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 11:33:56.923349 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.923372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 11:33:56.923378 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.923383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-19 11:33:56.923389 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.923394 | orchestrator | 2025-09-19 11:33:56.923399 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-19 11:33:56.923404 | orchestrator | Friday 19 September 2025 11:32:12 +0000 (0:00:01.604) 0:04:46.385 ****** 2025-09-19 11:33:56.923410 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.923415 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.923421 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.923426 | orchestrator | 2025-09-19 11:33:56.923432 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-19 11:33:56.923437 | orchestrator | Friday 19 September 2025 11:32:13 +0000 (0:00:01.380) 0:04:47.766 ****** 2025-09-19 11:33:56.923443 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:56.923448 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:56.923454 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:56.923460 | orchestrator | 2025-09-19 11:33:56.923465 | orchestrator | TASK [proxysql-config : Copying over 
nova-cell ProxySQL rules config] ********** 2025-09-19 11:33:56.923471 | orchestrator | Friday 19 September 2025 11:32:15 +0000 (0:00:02.224) 0:04:49.990 ****** 2025-09-19 11:33:56.923476 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:33:56.923482 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:33:56.923487 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:33:56.923493 | orchestrator | 2025-09-19 11:33:56.923499 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-19 11:33:56.923504 | orchestrator | Friday 19 September 2025 11:32:18 +0000 (0:00:02.784) 0:04:52.775 ****** 2025-09-19 11:33:56.923510 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.923515 | orchestrator | 2025-09-19 11:33:56.923521 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-19 11:33:56.923526 | orchestrator | Friday 19 September 2025 11:32:20 +0000 (0:00:01.616) 0:04:54.391 ****** 2025-09-19 11:33:56.923535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.923545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:33:56.923552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.923573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.923580 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.923585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.923593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:33:56.923602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.923608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.923628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.923634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.923639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:33:56.923647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.923656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.923661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.923667 | orchestrator | 2025-09-19 11:33:56.923672 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-19 11:33:56.923677 | orchestrator | Friday 19 September 2025 11:32:23 +0000 (0:00:03.423) 0:04:57.814 ****** 2025-09-19 11:33:56.923697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:33:56.923703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:33:56.923708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.923716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.923724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.923729 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.923748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:33:56.923754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:33:56.923760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.923765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.923772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.923780 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.923786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:33:56.923791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:33:56.923810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.923816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:33:56.923822 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:33:56.923827 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.923844 | orchestrator | 2025-09-19 11:33:56.923853 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-19 11:33:56.923858 | orchestrator | Friday 19 September 2025 11:32:24 +0000 (0:00:00.927) 0:04:58.741 ****** 2025-09-19 11:33:56.923863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 11:33:56.923869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 11:33:56.923877 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.923882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 11:33:56.923887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}})  2025-09-19 11:33:56.923892 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.923897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 11:33:56.923902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-19 11:33:56.923907 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.923912 | orchestrator | 2025-09-19 11:33:56.923918 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-19 11:33:56.923923 | orchestrator | Friday 19 September 2025 11:32:26 +0000 (0:00:01.498) 0:05:00.240 ****** 2025-09-19 11:33:56.923928 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.923933 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.923938 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.923943 | orchestrator | 2025-09-19 11:33:56.923948 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-19 11:33:56.923953 | orchestrator | Friday 19 September 2025 11:32:27 +0000 (0:00:01.453) 0:05:01.694 ****** 2025-09-19 11:33:56.923958 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:33:56.923963 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:33:56.923968 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:33:56.923973 | orchestrator | 2025-09-19 11:33:56.923978 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-19 11:33:56.923983 | orchestrator | Friday 19 September 2025 11:32:29 +0000 (0:00:01.962) 0:05:03.656 ****** 2025-09-19 11:33:56.924003 | 
orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.924009 | orchestrator | 2025-09-19 11:33:56.924014 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-19 11:33:56.924019 | orchestrator | Friday 19 September 2025 11:32:30 +0000 (0:00:01.482) 0:05:05.139 ****** 2025-09-19 11:33:56.924024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:33:56.924044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:33:56.924054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 11:33:56.924060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:33:56.924080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:33:56.924091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:33:56.924096 | orchestrator | 2025-09-19 11:33:56.924102 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-19 11:33:56.924107 | orchestrator | Friday 19 September 2025 11:32:35 +0000 (0:00:04.936) 0:05:10.076 ****** 2025-09-19 11:33:56.924115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 11:33:56.924120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 11:33:56.924126 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.924145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 11:33:56.924155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 11:33:56.924161 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.924169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 11:33:56.924174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 11:33:56.924180 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.924185 | orchestrator | 2025-09-19 11:33:56.924190 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-19 11:33:56.924195 | orchestrator | Friday 19 September 2025 11:32:36 +0000 (0:00:00.566) 0:05:10.642 ****** 2025-09-19 11:33:56.924214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-19 11:33:56.924220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-19 11:33:56.924229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}})
2025-09-19 11:33:56.924235 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.924240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-19 11:33:56.924245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-19 11:33:56.924250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-19 11:33:56.924255 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.924260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-19 11:33:56.924265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-19 11:33:56.924270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-19 11:33:56.924275 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.924280 | orchestrator |
2025-09-19 11:33:56.924285 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-09-19 11:33:56.924290 | orchestrator | Friday 19 September 2025 11:32:37 +0000 (0:00:01.468) 0:05:12.111 ******
2025-09-19 11:33:56.924295 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.924300 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.924308 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.924313 | orchestrator |
2025-09-19 11:33:56.924318 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-09-19 11:33:56.924323 | orchestrator | Friday 19 September 2025 11:32:38 +0000 (0:00:00.398) 0:05:12.509 ******
2025-09-19 11:33:56.924328 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.924333 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.924338 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.924343 | orchestrator |
2025-09-19 11:33:56.924348 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-09-19 11:33:56.924353 | orchestrator | Friday 19 September 2025 11:32:39 +0000 (0:00:01.188) 0:05:13.697 ******
2025-09-19 11:33:56.924358 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:33:56.924363 | orchestrator |
2025-09-19 11:33:56.924368 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-09-19 11:33:56.924373 | orchestrator | Friday 19 September 2025 11:32:41 +0000 (0:00:01.722) 0:05:15.421 ******
2025-09-19 11:33:56.924378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 11:33:56.924401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:33:56.924408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 11:33:56.924427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:33:56.924432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:33:56.924438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 11:33:56.924468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:33:56.924478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:33:56.924487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:33:56.924520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 11:33:56.924527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 11:33:56.924535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 11:33:56.924541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 11:33:56.924550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 11:33:56.924579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 11:33:56.924587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 11:33:56.924602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 11:33:56.924608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 11:33:56.924627 | orchestrator |
2025-09-19 11:33:56.924633 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-09-19 11:33:56.924638 | orchestrator | Friday 19 September 2025 11:32:45 +0000 (0:00:04.142) 0:05:19.563 ******
2025-09-19 11:33:56.924646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 11:33:56.924655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:33:56.924660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:33:56.924679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 11:33:56.924687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 11:33:56.924697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 11:33:56.924702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:33:56.924710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 11:33:56.924740 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.924748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:33:56.924753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 11:33:56.924762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 11:33:56.924767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 11:33:56.924789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 11:33:56.924794 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.924800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:33:56.924807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:33:56.924818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:33:56.924823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 11:33:56.924851 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-19 11:33:56.924857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:33:56.924862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:33:56.924870 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:33:56.924875 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.924880 | orchestrator | 2025-09-19 11:33:56.924886 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-19 11:33:56.924891 | orchestrator | Friday 19 September 2025 11:32:46 +0000 (0:00:00.872) 0:05:20.435 ****** 2025-09-19 11:33:56.924896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-19 11:33:56.924901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-19 11:33:56.924907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 11:33:56.924916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 11:33:56.924921 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.924926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-19 11:33:56.924931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-19 11:33:56.924939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 11:33:56.924945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 11:33:56.924950 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.924955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-19 11:33:56.924960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-19 11:33:56.924965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 11:33:56.924971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 11:33:56.924976 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.924981 | orchestrator | 2025-09-19 11:33:56.924986 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-19 11:33:56.924991 | orchestrator | Friday 19 September 2025 11:32:47 +0000 (0:00:01.387) 0:05:21.822 ****** 2025-09-19 11:33:56.924998 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.925003 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.925008 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.925013 | orchestrator | 2025-09-19 11:33:56.925018 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-19 11:33:56.925024 | orchestrator | Friday 19 September 2025 11:32:48 +0000 (0:00:00.503) 0:05:22.326 ****** 2025-09-19 11:33:56.925028 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.925034 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.925039 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.925044 | orchestrator | 2025-09-19 11:33:56.925049 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-19 11:33:56.925053 | orchestrator | Friday 19 September 2025 11:32:49 +0000 (0:00:01.475) 0:05:23.802 ****** 2025-09-19 11:33:56.925058 | orchestrator | included: 
rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.925067 | orchestrator | 2025-09-19 11:33:56.925072 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-19 11:33:56.925077 | orchestrator | Friday 19 September 2025 11:32:51 +0000 (0:00:01.952) 0:05:25.755 ****** 2025-09-19 11:33:56.925083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 11:33:56.925091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 11:33:56.925097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 11:33:56.925102 | orchestrator | 2025-09-19 11:33:56.925108 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-19 11:33:56.925113 | orchestrator | Friday 19 September 2025 11:32:53 +0000 (0:00:02.278) 0:05:28.033 ****** 2025-09-19 11:33:56.925120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-19 11:33:56.925133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-19 11:33:56.925138 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.925144 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.925151 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-19 11:33:56.925157 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.925162 | orchestrator | 2025-09-19 11:33:56.925167 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-19 11:33:56.925172 | orchestrator | Friday 19 September 2025 11:32:54 +0000 (0:00:00.456) 0:05:28.490 ****** 2025-09-19 11:33:56.925177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-19 11:33:56.925182 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.925187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-19 11:33:56.925192 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.925197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 
'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-19 11:33:56.925202 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.925208 | orchestrator | 2025-09-19 11:33:56.925213 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-19 11:33:56.925218 | orchestrator | Friday 19 September 2025 11:32:54 +0000 (0:00:00.637) 0:05:29.127 ****** 2025-09-19 11:33:56.925227 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.925232 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.925237 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.925242 | orchestrator | 2025-09-19 11:33:56.925247 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-19 11:33:56.925254 | orchestrator | Friday 19 September 2025 11:32:55 +0000 (0:00:00.982) 0:05:30.110 ****** 2025-09-19 11:33:56.925259 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:33:56.925265 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:33:56.925270 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:33:56.925275 | orchestrator | 2025-09-19 11:33:56.925280 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-19 11:33:56.925285 | orchestrator | Friday 19 September 2025 11:32:57 +0000 (0:00:01.270) 0:05:31.380 ****** 2025-09-19 11:33:56.925290 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:33:56.925295 | orchestrator | 2025-09-19 11:33:56.925300 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-19 11:33:56.925305 | orchestrator | Friday 19 September 2025 11:32:58 +0000 (0:00:01.378) 0:05:32.759 ****** 2025-09-19 11:33:56.925310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.925318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.925324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.925335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.925341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.925346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-19 11:33:56.925351 | orchestrator | 2025-09-19 11:33:56.925357 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-19 11:33:56.925362 | orchestrator | Friday 19 September 2025 11:33:04 +0000 
(0:00:06.045) 0:05:38.805 ****** 2025-09-19 11:33:56.925370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-19 11:33:56.925381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-19 
11:33:56.925387 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.925392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.925397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.925402 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.925410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.925419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-19 11:33:56.925424 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.925429 | orchestrator |
2025-09-19 11:33:56.925435 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-09-19 11:33:56.925442 | orchestrator | Friday 19 September 2025 11:33:05 +0000 (0:00:00.651) 0:05:39.456 ******
2025-09-19 11:33:56.925447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 11:33:56.925452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 11:33:56.925458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 11:33:56.925463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 11:33:56.925468 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.925473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 11:33:56.925478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 11:33:56.925483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 11:33:56.925489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 11:33:56.925494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 11:33:56.925499 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.925504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-19 11:33:56.925512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 11:33:56.925521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-19 11:33:56.925526 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.925531 | orchestrator |
2025-09-19 11:33:56.925536 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-09-19 11:33:56.925541 | orchestrator | Friday 19 September 2025 11:33:06 +0000 (0:00:00.835) 0:05:40.291 ******
2025-09-19 11:33:56.925546 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.925551 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.925556 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.925561 | orchestrator |
2025-09-19 11:33:56.925566 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-09-19 11:33:56.925571 | orchestrator | Friday 19 September 2025 11:33:07 +0000 (0:00:01.806) 0:05:42.098 ******
2025-09-19 11:33:56.925576 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.925581 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.925586 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.925591 | orchestrator |
2025-09-19 11:33:56.925596 | orchestrator | TASK [include_role : swift] ****************************************************
2025-09-19 11:33:56.925601 | orchestrator | Friday 19 September 2025 11:33:09 +0000 (0:00:01.936) 0:05:44.035 ******
2025-09-19 11:33:56.925606 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.925611 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.925616 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.925621 | orchestrator |
2025-09-19 11:33:56.925626 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-09-19 11:33:56.925631 | orchestrator | Friday 19 September 2025 11:33:10 +0000 (0:00:00.289) 0:05:44.324 ******
2025-09-19 11:33:56.925636 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.925641 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.925646 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.925651 | orchestrator |
2025-09-19 11:33:56.925656 | orchestrator | TASK [include_role : trove] ****************************************************
2025-09-19 11:33:56.925663 | orchestrator | Friday 19 September 2025 11:33:10 +0000 (0:00:00.286) 0:05:44.611 ******
2025-09-19 11:33:56.925668 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.925673 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.925678 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.925683 | orchestrator |
2025-09-19 11:33:56.925688 | orchestrator | TASK [include_role : venus] ****************************************************
2025-09-19 11:33:56.925693 | orchestrator | Friday 19 September 2025 11:33:10 +0000 (0:00:00.270) 0:05:44.881 ******
2025-09-19 11:33:56.925698 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.925703 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.925708 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.925713 | orchestrator |
2025-09-19 11:33:56.925718 | orchestrator | TASK [include_role : watcher] **************************************************
2025-09-19 11:33:56.925723 | orchestrator | Friday 19 September 2025 11:33:11 +0000 (0:00:00.506) 0:05:45.388 ******
2025-09-19 11:33:56.925728 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.925733 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.925738 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.925743 | orchestrator |
2025-09-19 11:33:56.925748 | orchestrator | TASK [include_role : zun] ******************************************************
2025-09-19 11:33:56.925753 | orchestrator | Friday 19 September 2025 11:33:11 +0000 (0:00:00.286) 0:05:45.674 ******
2025-09-19 11:33:56.925758 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.925763 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.925768 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.925777 | orchestrator |
2025-09-19 11:33:56.925782 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-09-19 11:33:56.925787 | orchestrator | Friday 19 September 2025 11:33:11 +0000 (0:00:00.487) 0:05:46.162 ******
2025-09-19 11:33:56.925792 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:56.925797 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:56.925802 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:56.925807 | orchestrator |
2025-09-19 11:33:56.925812 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-09-19 11:33:56.925817 | orchestrator | Friday 19 September 2025 11:33:12 +0000 (0:00:00.857) 0:05:47.020 ******
2025-09-19 11:33:56.925822 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:56.925827 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:56.925843 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:56.925848 | orchestrator |
2025-09-19 11:33:56.925853 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-09-19 11:33:56.925858 | orchestrator | Friday 19 September 2025 11:33:13 +0000 (0:00:00.378) 0:05:47.398 ******
2025-09-19 11:33:56.925863 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:56.925868 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:56.925873 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:56.925878 | orchestrator |
2025-09-19 11:33:56.925883 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-09-19 11:33:56.925888 | orchestrator | Friday 19 September 2025 11:33:14 +0000 (0:00:00.848) 0:05:48.246 ******
2025-09-19 11:33:56.925893 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:56.925898 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:56.925903 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:56.925908 | orchestrator |
2025-09-19 11:33:56.925913 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-09-19 11:33:56.925918 | orchestrator | Friday 19 September 2025 11:33:14 +0000 (0:00:01.112) 0:05:49.087 ******
2025-09-19 11:33:56.925923 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:56.925928 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:56.925933 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:56.925938 | orchestrator |
2025-09-19 11:33:56.925943 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-09-19 11:33:56.925951 | orchestrator | Friday 19 September 2025 11:33:15 +0000 (0:00:01.112) 0:05:50.200 ******
2025-09-19 11:33:56.925956 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.925961 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.925966 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.925971 | orchestrator |
2025-09-19 11:33:56.925976 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-09-19 11:33:56.925981 | orchestrator | Friday 19 September 2025 11:33:25 +0000 (0:00:09.283) 0:05:59.484 ******
2025-09-19 11:33:56.925986 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:56.925991 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:56.925996 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:56.926001 | orchestrator |
2025-09-19 11:33:56.926006 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-09-19 11:33:56.926011 | orchestrator | Friday 19 September 2025 11:33:26 +0000 (0:00:00.825) 0:06:00.310 ******
2025-09-19 11:33:56.926050 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.926061 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.926069 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.926077 | orchestrator |
2025-09-19 11:33:56.926086 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-09-19 11:33:56.926094 | orchestrator | Friday 19 September 2025 11:33:38 +0000 (0:00:12.136) 0:06:12.446 ******
2025-09-19 11:33:56.926103 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:56.926110 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:56.926115 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:56.926120 | orchestrator |
2025-09-19 11:33:56.926125 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-09-19 11:33:56.926130 | orchestrator | Friday 19 September 2025 11:33:39 +0000 (0:00:00.841) 0:06:13.288 ******
2025-09-19 11:33:56.926140 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:33:56.926145 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:33:56.926150 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:33:56.926155 | orchestrator |
2025-09-19 11:33:56.926160 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-09-19 11:33:56.926165 | orchestrator | Friday 19 September 2025 11:33:48 +0000 (0:00:09.562) 0:06:22.851 ******
2025-09-19 11:33:56.926170 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.926175 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.926180 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.926185 | orchestrator |
2025-09-19 11:33:56.926190 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-09-19 11:33:56.926195 | orchestrator | Friday 19 September 2025 11:33:49 +0000 (0:00:00.440) 0:06:23.291 ******
2025-09-19 11:33:56.926200 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.926208 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.926213 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.926218 | orchestrator |
2025-09-19 11:33:56.926223 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-09-19 11:33:56.926228 | orchestrator | Friday 19 September 2025 11:33:49 +0000 (0:00:00.377) 0:06:23.668 ******
2025-09-19 11:33:56.926233 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.926238 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.926243 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.926248 | orchestrator |
2025-09-19 11:33:56.926253 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-09-19 11:33:56.926258 | orchestrator | Friday 19 September 2025 11:33:49 +0000 (0:00:00.395) 0:06:24.063 ******
2025-09-19 11:33:56.926263 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.926268 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.926273 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.926278 | orchestrator |
2025-09-19 11:33:56.926283 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-09-19 11:33:56.926288 | orchestrator | Friday 19 September 2025 11:33:50 +0000 (0:00:00.923) 0:06:24.987 ******
2025-09-19 11:33:56.926293 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.926298 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.926303 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.926308 | orchestrator |
2025-09-19 11:33:56.926313 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-09-19 11:33:56.926318 | orchestrator | Friday 19 September 2025 11:33:51 +0000 (0:00:00.433) 0:06:25.420 ******
2025-09-19 11:33:56.926323 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:33:56.926328 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:33:56.926333 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:33:56.926338 | orchestrator |
2025-09-19 11:33:56.926343 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-09-19 11:33:56.926348 | orchestrator | Friday 19 September 2025 11:33:51 +0000 (0:00:00.376) 0:06:25.797 ******
2025-09-19 11:33:56.926353 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:56.926358 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:56.926363 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:56.926368 | orchestrator |
2025-09-19 11:33:56.926373 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-09-19 11:33:56.926378 | orchestrator | Friday 19 September 2025 11:33:53 +0000 (0:00:01.417) 0:06:27.214 ******
2025-09-19 11:33:56.926383 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:33:56.926388 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:33:56.926393 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:33:56.926398 | orchestrator |
2025-09-19 11:33:56.926403 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:33:56.926408 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 11:33:56.926416 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 11:33:56.926422 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 11:33:56.926427 | orchestrator |
2025-09-19 11:33:56.926432 | orchestrator |
2025-09-19 11:33:56.926437 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:33:56.926442 | orchestrator | Friday 19 September 2025 11:33:54 +0000 (0:00:01.322) 0:06:28.537 ******
2025-09-19 11:33:56.926447 | orchestrator | ===============================================================================
2025-09-19 11:33:56.926452 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.14s
2025-09-19 11:33:56.926457 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.56s
2025-09-19 11:33:56.926462 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.28s
2025-09-19 11:33:56.926467 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.42s
2025-09-19 11:33:56.926472 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.05s
2025-09-19 11:33:56.926477 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.23s
2025-09-19 11:33:56.926482 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.94s
2025-09-19 11:33:56.926487 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.72s
2025-09-19 11:33:56.926492 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.64s
2025-09-19 11:33:56.926497 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.39s
2025-09-19 11:33:56.926502 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.37s
2025-09-19 11:33:56.926507 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.33s
2025-09-19 11:33:56.926512 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.30s
2025-09-19 11:33:56.926516 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.14s
2025-09-19 11:33:56.926521 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.08s
2025-09-19 11:33:56.926526 | orchestrator | loadbalancer : Check loadbalancer containers ---------------------------- 4.05s
2025-09-19 11:33:56.926531 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.00s
2025-09-19 11:33:56.926536 | orchestrator | loadbalancer : Copying over haproxy.cfg --------------------------------- 3.96s
2025-09-19 11:33:56.926592 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.83s
2025-09-19 11:33:56.926606 | orchestrator |
haproxy-config : Copying over horizon haproxy config -------------------- 3.76s
2025-09-19 11:33:56.926615 | orchestrator | 2025-09-19 11:33:56 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:33:59.962643 | orchestrator | 2025-09-19 11:33:59 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:33:59.963560 | orchestrator | 2025-09-19 11:33:59 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:33:59.967081 | orchestrator | 2025-09-19 11:33:59 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:33:59.967125 | orchestrator | 2025-09-19 11:33:59 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:03.020077 | orchestrator | 2025-09-19 11:34:03 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:03.020188 | orchestrator | 2025-09-19 11:34:03 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:03.021316 | orchestrator | 2025-09-19 11:34:03 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:03.022868 | orchestrator | 2025-09-19 11:34:03 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:06.116129 | orchestrator | 2025-09-19 11:34:06 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:06.116615 | orchestrator | 2025-09-19 11:34:06 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:06.117602 | orchestrator | 2025-09-19 11:34:06 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:06.117629 | orchestrator | 2025-09-19 11:34:06 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:09.155261 | orchestrator | 2025-09-19 11:34:09 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:09.155357 | orchestrator | 2025-09-19 11:34:09 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:09.156166 | orchestrator | 2025-09-19 11:34:09 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:09.156197 | orchestrator | 2025-09-19 11:34:09 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:12.198403 | orchestrator | 2025-09-19 11:34:12 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:12.198512 | orchestrator | 2025-09-19 11:34:12 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:12.199730 | orchestrator | 2025-09-19 11:34:12 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:12.199760 | orchestrator | 2025-09-19 11:34:12 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:15.244383 | orchestrator | 2025-09-19 11:34:15 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:15.246720 | orchestrator | 2025-09-19 11:34:15 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:15.248474 | orchestrator | 2025-09-19 11:34:15 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:15.248521 | orchestrator | 2025-09-19 11:34:15 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:18.283276 | orchestrator | 2025-09-19 11:34:18 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:18.283367 | orchestrator | 2025-09-19 11:34:18 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:18.283991 | orchestrator | 2025-09-19 11:34:18 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:18.284121 | orchestrator | 2025-09-19 11:34:18 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:21.330748 | orchestrator | 2025-09-19 11:34:21 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:21.331723 | orchestrator | 2025-09-19 11:34:21 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:21.332545 | orchestrator | 2025-09-19 11:34:21 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:21.332758 | orchestrator | 2025-09-19 11:34:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:24.379006 | orchestrator | 2025-09-19 11:34:24 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:24.379591 | orchestrator | 2025-09-19 11:34:24 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:24.380489 | orchestrator | 2025-09-19 11:34:24 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:24.380747 | orchestrator | 2025-09-19 11:34:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:27.414674 | orchestrator | 2025-09-19 11:34:27 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:27.415742 | orchestrator | 2025-09-19 11:34:27 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:27.416559 | orchestrator | 2025-09-19 11:34:27 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:27.417825 | orchestrator | 2025-09-19 11:34:27 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:30.530337 | orchestrator | 2025-09-19 11:34:30 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:30.530461 | orchestrator | 2025-09-19 11:34:30 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:30.530483 | orchestrator | 2025-09-19 11:34:30 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:30.530505 | orchestrator | 2025-09-19 11:34:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:33.577186 | orchestrator | 2025-09-19 11:34:33 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:33.578251 | orchestrator | 2025-09-19 11:34:33 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:33.579290 | orchestrator | 2025-09-19 11:34:33 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:33.579718 | orchestrator | 2025-09-19 11:34:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:36.623301 | orchestrator | 2025-09-19 11:34:36 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:36.625609 | orchestrator | 2025-09-19 11:34:36 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:36.627839 | orchestrator | 2025-09-19 11:34:36 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:36.628363 | orchestrator | 2025-09-19 11:34:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:39.684463 | orchestrator | 2025-09-19 11:34:39 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:39.686403 | orchestrator | 2025-09-19 11:34:39 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:39.688270 | orchestrator | 2025-09-19 11:34:39 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:39.688647 | orchestrator | 2025-09-19 11:34:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:42.733427 | orchestrator | 2025-09-19 11:34:42 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:42.734699 | orchestrator | 2025-09-19 11:34:42 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:42.735455 | orchestrator | 2025-09-19 11:34:42 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:42.735480 | orchestrator | 2025-09-19 11:34:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:45.781486 | orchestrator | 2025-09-19 11:34:45 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:45.782349 | orchestrator | 2025-09-19 11:34:45 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:45.783512 | orchestrator | 2025-09-19 11:34:45 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:45.783545 | orchestrator | 2025-09-19 11:34:45 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:48.826698 | orchestrator | 2025-09-19 11:34:48 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:48.827806 | orchestrator | 2025-09-19 11:34:48 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:48.829364 | orchestrator | 2025-09-19 11:34:48 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:48.829388 | orchestrator | 2025-09-19 11:34:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:51.872252 | orchestrator | 2025-09-19 11:34:51 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:51.874186 | orchestrator | 2025-09-19 11:34:51 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:51.876631 | orchestrator | 2025-09-19 11:34:51 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:51.876658 | orchestrator | 2025-09-19 11:34:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:54.931160 | orchestrator | 2025-09-19 11:34:54 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:54.931603 | orchestrator | 2025-09-19 11:34:54 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:54.935983 | orchestrator | 2025-09-19 11:34:54 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:54.936016 | orchestrator | 2025-09-19 11:34:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:34:57.973792 | orchestrator | 2025-09-19 11:34:57 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:34:57.975283 | orchestrator | 2025-09-19 11:34:57 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:34:57.977393 | orchestrator | 2025-09-19 11:34:57 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:34:57.977684 | orchestrator | 2025-09-19 11:34:57 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:01.026960 | orchestrator | 2025-09-19 11:35:01 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:01.028490 | orchestrator | 2025-09-19 11:35:01 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:01.029980 | orchestrator | 2025-09-19 11:35:01 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:01.030007 | orchestrator | 2025-09-19 11:35:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:04.069144 | orchestrator | 2025-09-19 11:35:04 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:04.069513 | orchestrator | 2025-09-19 11:35:04 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:04.070561 | orchestrator | 2025-09-19 11:35:04 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:04.070884 | orchestrator | 2025-09-19 11:35:04 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:07.114243 | orchestrator | 2025-09-19 11:35:07 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:07.114619 | orchestrator | 2025-09-19 11:35:07 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:07.116671 | orchestrator | 2025-09-19 11:35:07 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:07.116759 | orchestrator | 2025-09-19 11:35:07 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:10.168291 | orchestrator | 2025-09-19 11:35:10 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:10.173509 | orchestrator | 2025-09-19 11:35:10 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:10.174565 | orchestrator | 2025-09-19 11:35:10 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:10.174627 | orchestrator | 2025-09-19 11:35:10 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:13.215020 | orchestrator | 2025-09-19 11:35:13 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:13.217504 | orchestrator | 2025-09-19 11:35:13 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:13.218662 | orchestrator | 2025-09-19 11:35:13 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:13.218692 | orchestrator | 2025-09-19 11:35:13 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:16.270978 | orchestrator | 2025-09-19 11:35:16 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:16.272601 | orchestrator | 2025-09-19 11:35:16 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:16.274408 | orchestrator | 2025-09-19 11:35:16 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:16.274446 | orchestrator | 2025-09-19 11:35:16 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:19.328383 | orchestrator | 2025-09-19 11:35:19 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:19.331323 | orchestrator | 2025-09-19 11:35:19 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:19.332763 | orchestrator | 2025-09-19 11:35:19 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:19.332997 | orchestrator | 2025-09-19 11:35:19 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:22.382531 | orchestrator | 2025-09-19 11:35:22 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:22.382624 | orchestrator | 2025-09-19 11:35:22 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:22.383419 | orchestrator | 2025-09-19 11:35:22 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:22.383449 | orchestrator | 2025-09-19 11:35:22 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:25.430190 | orchestrator | 2025-09-19 11:35:25 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:25.434952 | orchestrator | 2025-09-19 11:35:25 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:25.438806 | orchestrator | 2025-09-19 11:35:25 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:25.438839 | orchestrator | 2025-09-19 11:35:25 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:28.488296 | orchestrator | 2025-09-19 11:35:28 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:28.490375 | orchestrator | 2025-09-19 11:35:28 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:28.492029 | orchestrator | 2025-09-19 11:35:28 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:28.492068 | orchestrator | 2025-09-19 11:35:28 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:31.537115 | orchestrator | 2025-09-19 11:35:31 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:31.538207 | orchestrator | 2025-09-19 11:35:31 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:31.540442 | orchestrator | 2025-09-19 11:35:31 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:31.540905 | orchestrator | 2025-09-19 11:35:31 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:34.585292 | orchestrator | 2025-09-19 11:35:34 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:34.587081 | orchestrator | 2025-09-19 11:35:34 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:34.589503 | orchestrator | 2025-09-19 11:35:34 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:34.589546 | orchestrator | 2025-09-19 11:35:34 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:37.642013 | orchestrator | 2025-09-19 11:35:37 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:37.642163 | orchestrator | 2025-09-19 11:35:37 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:37.643236 | orchestrator | 2025-09-19 11:35:37 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:37.643269 | orchestrator | 2025-09-19 11:35:37 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:40.701957 | orchestrator | 2025-09-19 11:35:40 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:40.702112 | orchestrator | 2025-09-19 11:35:40 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state STARTED
2025-09-19 11:35:40.704313 | orchestrator | 2025-09-19 11:35:40 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED
2025-09-19 11:35:40.704412 | orchestrator | 2025-09-19 11:35:40 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:35:43.772258 | orchestrator |
2025-09-19 11:35:43 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED
2025-09-19 11:35:43.780004 | orchestrator | 2025-09-19 11:35:43 | INFO  | Task 5e6fa74d-6eb4-43c9-98f3-e14aaef96cf7 is in state SUCCESS
2025-09-19 11:35:43.785421 | orchestrator |
2025-09-19 11:35:43.785490 | orchestrator |
2025-09-19 11:35:43.785503 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-09-19 11:35:43.785515 | orchestrator |
2025-09-19 11:35:43.785526 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-09-19 11:35:43.785537 | orchestrator | Friday 19 September 2025 11:24:52 +0000 (0:00:00.906) 0:00:00.906 ******
2025-09-19 11:35:43.785549 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.785561 | orchestrator |
2025-09-19 11:35:43.785572 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-09-19 11:35:43.785585 | orchestrator | Friday 19 September 2025 11:24:53 +0000 (0:00:01.196) 0:00:02.103 ******
2025-09-19 11:35:43.785602 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.785622 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.785639 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.785794 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.785808 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.785819 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.785829 | orchestrator |
2025-09-19 11:35:43.785840 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-19 11:35:43.785851 | orchestrator | Friday 19 September 2025 11:24:55 +0000 (0:00:01.579) 0:00:03.683 ******
2025-09-19 11:35:43.785862 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.785873 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.785883 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.785918 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.785929 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.785940 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.785951 | orchestrator |
2025-09-19 11:35:43.785961 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-19 11:35:43.785974 | orchestrator | Friday 19 September 2025 11:24:55 +0000 (0:00:00.776) 0:00:04.459 ******
2025-09-19 11:35:43.786211 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.786225 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.786236 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.786246 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.786257 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.786267 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.786278 | orchestrator |
2025-09-19 11:35:43.786289 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-19 11:35:43.786299 | orchestrator | Friday 19 September 2025 11:24:57 +0000 (0:00:01.249) 0:00:05.708 ******
2025-09-19 11:35:43.786310 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.786321 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.786331 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.786342 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.786352 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.786363 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.786373 | orchestrator |
2025-09-19 11:35:43.786384 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-19 11:35:43.786395 | orchestrator | Friday 19 September 2025 11:24:57 +0000 (0:00:00.621) 0:00:06.330 ******
2025-09-19 11:35:43.786406 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.786416 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.786427 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.786437 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.786448 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.786459 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.786469 | orchestrator |
2025-09-19 11:35:43.786480 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-19 11:35:43.786491 | orchestrator | Friday 19 September 2025 11:24:58 +0000 (0:00:00.525) 0:00:06.855 ******
2025-09-19 11:35:43.786501 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.786512 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.786522 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.786533 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.786544 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.786554 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.786565 | orchestrator |
2025-09-19 11:35:43.786575 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-19 11:35:43.786587 | orchestrator | Friday 19 September 2025 11:24:58 +0000 (0:00:00.778) 0:00:07.634 ******
2025-09-19 11:35:43.786598 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.786609 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.786620 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.786630 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.786644 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.786704 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.786726 | orchestrator |
2025-09-19 11:35:43.786739 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-19 11:35:43.786750 | orchestrator | Friday 19 September 2025 11:24:59 +0000 (0:00:00.724) 0:00:08.359 ******
2025-09-19 11:35:43.786761 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.786772 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.786782 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.786793 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.786803 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.786813 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.786824 | orchestrator |
2025-09-19 11:35:43.786834 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-19 11:35:43.786845 | orchestrator | Friday 19 September 2025 11:25:01 +0000 (0:00:01.548) 0:00:09.907 ******
2025-09-19 11:35:43.786868 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 11:35:43.786879 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 11:35:43.786890 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 11:35:43.786900 | orchestrator |
2025-09-19 11:35:43.786911 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-19 11:35:43.786921 | orchestrator | Friday 19 September 2025 11:25:01 +0000 (0:00:00.610) 0:00:10.518 ******
2025-09-19 11:35:43.786932 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.786942 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.786952 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.786963 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.786973 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.786983 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.786994 | orchestrator |
2025-09-19 11:35:43.787019 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-19 11:35:43.787102 | orchestrator | Friday 19 September 2025 11:25:03 +0000 (0:00:03.297) 0:00:12.150 ******
2025-09-19
11:35:43.787116 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 11:35:43.787127 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 11:35:43.787138 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 11:35:43.787149 | orchestrator |
2025-09-19 11:35:43.787160 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-19 11:35:43.787171 | orchestrator | Friday 19 September 2025 11:25:06 +0000 (0:00:03.297) 0:00:15.448 ******
2025-09-19 11:35:43.787181 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 11:35:43.787192 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 11:35:43.787202 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 11:35:43.787213 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.787224 | orchestrator |
2025-09-19 11:35:43.787234 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-09-19 11:35:43.787245 | orchestrator | Friday 19 September 2025 11:25:07 +0000 (0:00:00.876) 0:00:16.325 ******
2025-09-19 11:35:43.787258 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.787272 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.787282 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.787293 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.787304 | orchestrator |
2025-09-19 11:35:43.787315 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-19 11:35:43.787326 | orchestrator | Friday 19 September 2025 11:25:08 +0000 (0:00:00.883) 0:00:17.208 ******
2025-09-19 11:35:43.787339 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.787353 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.787378 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.787390 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.787400 | orchestrator |
2025-09-19 11:35:43.787411 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-19 11:35:43.787422 | orchestrator | Friday 19 September 2025 11:25:09 +0000 (0:00:00.454) 0:00:17.663 ******
2025-09-19 11:35:43.787435 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-19 11:25:04.262996', 'end': '2025-09-19 11:25:04.552381', 'delta': '0:00:00.289385', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.787461 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-19 11:25:05.155602', 'end': '2025-09-19 11:25:05.472593', 'delta': '0:00:00.316991', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.787474 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-19 11:25:06.233931', 'end': '2025-09-19 11:25:06.555078', 'delta': '0:00:00.321147', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.787486 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.787496 | orchestrator |
2025-09-19 11:35:43.787535 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-19 11:35:43.787548 | orchestrator | Friday 19 September 2025 11:25:09 +0000 (0:00:00.389) 0:00:18.052 ******
2025-09-19 11:35:43.787558 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.787569 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.787579 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.787590 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.787600 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.787611 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.787630 | orchestrator |
2025-09-19 11:35:43.787640 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-19 11:35:43.787679 | orchestrator | Friday 19 September 2025 11:25:12 +0000 (0:00:02.864) 0:00:20.917 ******
2025-09-19 11:35:43.787698 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.787717 | orchestrator |
2025-09-19 11:35:43.787735 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-19 11:35:43.787753 | orchestrator | Friday 19 September 2025 11:25:13 +0000 (0:00:00.954) 0:00:21.872 ******
2025-09-19 11:35:43.787765 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.787775 | orchestrator
| skipping: [testbed-node-1]
2025-09-19 11:35:43.787786 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.787796 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.787807 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.787818 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.787828 | orchestrator |
2025-09-19 11:35:43.787839 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-19 11:35:43.787849 | orchestrator | Friday 19 September 2025 11:25:15 +0000 (0:00:02.211) 0:00:24.084 ******
2025-09-19 11:35:43.787860 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.787870 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.787881 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.787891 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.787902 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.787912 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.787922 | orchestrator |
2025-09-19 11:35:43.787933 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 11:35:43.787950 | orchestrator | Friday 19 September 2025 11:25:17 +0000 (0:00:01.662) 0:00:25.746 ******
2025-09-19 11:35:43.787961 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.787971 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.787982 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.787992 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.788003 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.788013 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.788024 | orchestrator |
2025-09-19 11:35:43.788034 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-19 11:35:43.788045 | orchestrator | Friday 19 September 2025 11:25:17 +0000 (0:00:00.851) 0:00:26.598 ******
2025-09-19 11:35:43.788055 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.788066 | orchestrator |
2025-09-19 11:35:43.788076 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-19 11:35:43.788247 | orchestrator | Friday 19 September 2025 11:25:18 +0000 (0:00:00.131) 0:00:26.730 ******
2025-09-19 11:35:43.788258 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.788269 | orchestrator |
2025-09-19 11:35:43.788279 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 11:35:43.788290 | orchestrator | Friday 19 September 2025 11:25:18 +0000 (0:00:00.430) 0:00:27.160 ******
2025-09-19 11:35:43.788300 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.788311 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.788321 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.788332 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.788342 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.788353 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.788363 | orchestrator |
2025-09-19 11:35:43.788374 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-19 11:35:43.788393 | orchestrator | Friday 19 September 2025 11:25:19 +0000 (0:00:00.804) 0:00:27.965 ******
2025-09-19 11:35:43.788404 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.788415 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.788425 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.788436 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.788455 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.788465 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.788476 | orchestrator |
2025-09-19 11:35:43.788486 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-19 11:35:43.788497 | orchestrator | Friday 19 September 2025 11:25:20 +0000 (0:00:00.975) 0:00:28.941 ******
2025-09-19 11:35:43.788508 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.788518 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.788529 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.788539 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.788550 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.788560 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.788571 | orchestrator |
2025-09-19 11:35:43.788581 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-19 11:35:43.788592 | orchestrator | Friday 19 September 2025 11:25:21 +0000 (0:00:00.774) 0:00:29.715 ******
2025-09-19 11:35:43.788603 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.788613 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.788624 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.788634 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.788644 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.788702 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.788713 | orchestrator |
2025-09-19 11:35:43.788724 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-19 11:35:43.788735 | orchestrator | Friday 19 September 2025 11:25:21 +0000 (0:00:00.767) 0:00:30.483 ******
2025-09-19 11:35:43.788745 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.788756 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.788766 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.788777 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.788788 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.788798 | orchestrator
| skipping: [testbed-node-5]
2025-09-19 11:35:43.788809 | orchestrator |
2025-09-19 11:35:43.788819 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-19 11:35:43.788830 | orchestrator | Friday 19 September 2025 11:25:22 +0000 (0:00:00.855) 0:00:31.338 ******
2025-09-19 11:35:43.788841 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.788851 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.788862 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.788873 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.788883 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.788893 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.788904 | orchestrator |
2025-09-19 11:35:43.789052 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-19 11:35:43.789067 | orchestrator | Friday 19 September 2025 11:25:23 +0000 (0:00:00.648) 0:00:31.986 ******
2025-09-19 11:35:43.789078 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.789104 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.789115 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.789125 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.789136 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.789146 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.789157 | orchestrator |
2025-09-19 11:35:43.789168 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-19 11:35:43.789178 | orchestrator | Friday 19 September 2025 11:25:23 +0000 (0:00:00.485) 0:00:32.471 ******
2025-09-19 11:35:43.789190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce', 'scsi-SQEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part16', 'scsi-SQEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:35:43.789348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:35:43.789361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789394 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.789406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 11:35:43.789470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [],
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62', 'scsi-SQEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part1', 'scsi-SQEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part14', 'scsi-SQEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part15', 'scsi-SQEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part16', 'scsi-SQEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.789504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.789523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789550 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789561 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.789572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb', 'scsi-SQEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part1', 'scsi-SQEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part14', 'scsi-SQEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part15', 'scsi-SQEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part16', 'scsi-SQEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.789688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.789701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f2e5a9ae--16db--5885--a5f1--5293896cd0a9-osd--block--f2e5a9ae--16db--5885--a5f1--5293896cd0a9', 'dm-uuid-LVM-T0qtfsVXAM2pxgkSZHPOh8wOanAOcnyXtrQDNWKQpMdeLKVaBer12Y5MriBAgVYI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5-osd--block--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5', 'dm-uuid-LVM-u2rmXfbzi0TuTIdRJEkihfDRShJacu7nwni3ibQB2pd4SpbFkYjAfzf4Sfdt0x2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789794 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.789808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.789876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790067 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 
'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.790098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f2e5a9ae--16db--5885--a5f1--5293896cd0a9-osd--block--f2e5a9ae--16db--5885--a5f1--5293896cd0a9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BdEUeW-T1x2-3zEI-sGKj-LbaC-JGTN-0d2P5Z', 'scsi-0QEMU_QEMU_HARDDISK_729b54dd-f4c1-4a98-9e39-7aa2dbdf058c', 'scsi-SQEMU_QEMU_HARDDISK_729b54dd-f4c1-4a98-9e39-7aa2dbdf058c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.790119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--499bb3ba--5d36--55d4--9ab4--77fea8769c5a-osd--block--499bb3ba--5d36--55d4--9ab4--77fea8769c5a', 'dm-uuid-LVM-sKkYbBtPH7TYB3qfRwMoXcTlubcZTSnUwbyaQ36SqEI2lNR4qCbIkTanXU63GGfj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--482defc3--95b3--50a2--a4e9--5dea1f7a25a6-osd--block--482defc3--95b3--50a2--a4e9--5dea1f7a25a6', 'dm-uuid-LVM-Sl0oI0DJ7k2WfSqpCpDPMQAJ3ZO72PP8zuJsSfJnx1r8Dx3XYQOxuPl2OhsGiW57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5-osd--block--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OD12ed-tnfe-q2vC-3MJo-XQuM-2lVY-yBnEkJ', 'scsi-0QEMU_QEMU_HARDDISK_ff354216-c1d2-4110-b9e3-f4cf06b21a62', 'scsi-SQEMU_QEMU_HARDDISK_ff354216-c1d2-4110-b9e3-f4cf06b21a62'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.790161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2859ea6e-5cf3-4595-8353-f67711d21d4e', 'scsi-SQEMU_QEMU_HARDDISK_2859ea6e-5cf3-4595-8353-f67711d21d4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.790190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.790213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-09-19 11:35:43.790224 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.790243 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part16', 
'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.790331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6-osd--block--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6', 'dm-uuid-LVM-FoG8G6pM9fdL9UmfNP40N67XYHhtV7O75sHctXcNSZ3xMwuxruSzQBMWTX3PJZ3g'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--499bb3ba--5d36--55d4--9ab4--77fea8769c5a-osd--block--499bb3ba--5d36--55d4--9ab4--77fea8769c5a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TejcCD-UdZ2-c8zU-pqzM-8B6r-uMOu-IbZL3W', 'scsi-0QEMU_QEMU_HARDDISK_a7da52da-8ff9-443f-9c01-2997209c642a', 'scsi-SQEMU_QEMU_HARDDISK_a7da52da-8ff9-443f-9c01-2997209c642a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.790360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9f018b0b--9dc8--5104--9bc9--2c288294c8fd-osd--block--9f018b0b--9dc8--5104--9bc9--2c288294c8fd', 'dm-uuid-LVM-5r8hi0765R3tOEAFRn6eSUN63tC5cvhQCCRz4D05AtspdLxUkd72JgtEFGklhg06'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--482defc3--95b3--50a2--a4e9--5dea1f7a25a6-osd--block--482defc3--95b3--50a2--a4e9--5dea1f7a25a6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Afv5wH-m8oE-CrJP-EkRU-7lo4-wmhy-8decif', 'scsi-0QEMU_QEMU_HARDDISK_2d05b72c-4493-4412-ad25-c0b6cbf3de12', 'scsi-SQEMU_QEMU_HARDDISK_2d05b72c-4493-4412-ad25-c0b6cbf3de12'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.790388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c', 'scsi-SQEMU_QEMU_HARDDISK_a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.790419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.790442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-09-19 11:35:43.790464 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.790476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:35:43.790545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part1', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part14', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part15', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part16', 
'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.790565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6-osd--block--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P1hpgE-vTn7-lguI-7OdR-dZzc-V1cJ-ofZPGd', 'scsi-0QEMU_QEMU_HARDDISK_4ab3eba9-7f04-4545-b862-1d19a7d78b14', 'scsi-SQEMU_QEMU_HARDDISK_4ab3eba9-7f04-4545-b862-1d19a7d78b14'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:35:43.790578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9f018b0b--9dc8--5104--9bc9--2c288294c8fd-osd--block--9f018b0b--9dc8--5104--9bc9--2c288294c8fd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-O1sv3G-jn1l-O4mD-DT5w-gp6W-Oe6c-ak2i7W', 'scsi-0QEMU_QEMU_HARDDISK_82c12b62-ffbd-484b-a107-b043e35ec15c', 'scsi-SQEMU_QEMU_HARDDISK_82c12b62-ffbd-484b-a107-b043e35ec15c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:35:43.790594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23c8bdec-2f7a-480a-98d1-592cee3b582b', 'scsi-SQEMU_QEMU_HARDDISK_23c8bdec-2f7a-480a-98d1-592cee3b582b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:35:43.790606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 11:35:43.790623 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.790634 | orchestrator |
2025-09-19 11:35:43.790645 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-09-19 11:35:43.790700 | orchestrator | Friday 19 September 2025 11:25:24 +0000 (0:00:00.991) 0:00:33.463 ******
2025-09-19 11:35:43.790712 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.790731 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.790743 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0,
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.790754 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.790771 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.790782 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.792820 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.792873 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.792895 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce', 'scsi-SQEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part1', 'scsi-SQEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part14', 'scsi-SQEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part15', 'scsi-SQEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part16', 'scsi-SQEMU_QEMU_HARDDISK_ba575dcd-e52c-4614-affd-7d36970897ce-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-19 11:35:43.792983 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793005 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.793022 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793052 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793063 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793074 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793089 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793100 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793208 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793234 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793244 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793254 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793262 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793275 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793283 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.793352 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-19 11:35:43.793364 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793372 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793386 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62', 'scsi-SQEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part1', 'scsi-SQEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part14', 'scsi-SQEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part15', 'scsi-SQEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part16', 'scsi-SQEMU_QEMU_HARDDISK_20296197-9eb2-417a-a415-95b3bd769f62-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793458 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb', 'scsi-SQEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part1', 'scsi-SQEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part14', 'scsi-SQEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part15', 'scsi-SQEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part16', 'scsi-SQEMU_QEMU_HARDDISK_67c62fa1-59eb-40e5-ac72-c4fc0b8c04fb-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793475 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793484 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793565 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f2e5a9ae--16db--5885--a5f1--5293896cd0a9-osd--block--f2e5a9ae--16db--5885--a5f1--5293896cd0a9', 'dm-uuid-LVM-T0qtfsVXAM2pxgkSZHPOh8wOanAOcnyXtrQDNWKQpMdeLKVaBer12Y5MriBAgVYI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793580 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5-osd--block--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5', 'dm-uuid-LVM-u2rmXfbzi0TuTIdRJEkihfDRShJacu7nwni3ibQB2pd4SpbFkYjAfzf4Sfdt0x2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793588 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793597 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793609 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793618 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793631 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.793715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793737 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793821 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793843 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f2e5a9ae--16db--5885--a5f1--5293896cd0a9-osd--block--f2e5a9ae--16db--5885--a5f1--5293896cd0a9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BdEUeW-T1x2-3zEI-sGKj-LbaC-JGTN-0d2P5Z', 'scsi-0QEMU_QEMU_HARDDISK_729b54dd-f4c1-4a98-9e39-7aa2dbdf058c', 'scsi-SQEMU_QEMU_HARDDISK_729b54dd-f4c1-4a98-9e39-7aa2dbdf058c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793853 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5-osd--block--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OD12ed-tnfe-q2vC-3MJo-XQuM-2lVY-yBnEkJ', 'scsi-0QEMU_QEMU_HARDDISK_ff354216-c1d2-4110-b9e3-f4cf06b21a62', 'scsi-SQEMU_QEMU_HARDDISK_ff354216-c1d2-4110-b9e3-f4cf06b21a62'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2859ea6e-5cf3-4595-8353-f67711d21d4e', 'scsi-SQEMU_QEMU_HARDDISK_2859ea6e-5cf3-4595-8353-f67711d21d4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793940 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--499bb3ba--5d36--55d4--9ab4--77fea8769c5a-osd--block--499bb3ba--5d36--55d4--9ab4--77fea8769c5a', 'dm-uuid-LVM-sKkYbBtPH7TYB3qfRwMoXcTlubcZTSnUwbyaQ36SqEI2lNR4qCbIkTanXU63GGfj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793952 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--482defc3--95b3--50a2--a4e9--5dea1f7a25a6-osd--block--482defc3--95b3--50a2--a4e9--5dea1f7a25a6', 'dm-uuid-LVM-Sl0oI0DJ7k2WfSqpCpDPMQAJ3ZO72PP8zuJsSfJnx1r8Dx3XYQOxuPl2OhsGiW57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793961 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793969 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793981 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.793995 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.794004 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.794092 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794193 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794201 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794210 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6-osd--block--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6', 'dm-uuid-LVM-FoG8G6pM9fdL9UmfNP40N67XYHhtV7O75sHctXcNSZ3xMwuxruSzQBMWTX3PJZ3g'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794218 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794257 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9f018b0b--9dc8--5104--9bc9--2c288294c8fd-osd--block--9f018b0b--9dc8--5104--9bc9--2c288294c8fd', 'dm-uuid-LVM-5r8hi0765R3tOEAFRn6eSUN63tC5cvhQCCRz4D05AtspdLxUkd72JgtEFGklhg06'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794323 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794336 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794349 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--499bb3ba--5d36--55d4--9ab4--77fea8769c5a-osd--block--499bb3ba--5d36--55d4--9ab4--77fea8769c5a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TejcCD-UdZ2-c8zU-pqzM-8B6r-uMOu-IbZL3W', 'scsi-0QEMU_QEMU_HARDDISK_a7da52da-8ff9-443f-9c01-2997209c642a', 'scsi-SQEMU_QEMU_HARDDISK_a7da52da-8ff9-443f-9c01-2997209c642a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794365 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794424 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--482defc3--95b3--50a2--a4e9--5dea1f7a25a6-osd--block--482defc3--95b3--50a2--a4e9--5dea1f7a25a6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Afv5wH-m8oE-CrJP-EkRU-7lo4-wmhy-8decif', 'scsi-0QEMU_QEMU_HARDDISK_2d05b72c-4493-4412-ad25-c0b6cbf3de12', 'scsi-SQEMU_QEMU_HARDDISK_2d05b72c-4493-4412-ad25-c0b6cbf3de12'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794436 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794444 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c', 'scsi-SQEMU_QEMU_HARDDISK_a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794460 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794474 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794483 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.794542 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794553 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794561 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794570 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 11:35:43.794629 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part1', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part14', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part15', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part16', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.794666 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6-osd--block--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P1hpgE-vTn7-lguI-7OdR-dZzc-V1cJ-ofZPGd', 'scsi-0QEMU_QEMU_HARDDISK_4ab3eba9-7f04-4545-b862-1d19a7d78b14', 'scsi-SQEMU_QEMU_HARDDISK_4ab3eba9-7f04-4545-b862-1d19a7d78b14'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.794691 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9f018b0b--9dc8--5104--9bc9--2c288294c8fd-osd--block--9f018b0b--9dc8--5104--9bc9--2c288294c8fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-O1sv3G-jn1l-O4mD-DT5w-gp6W-Oe6c-ak2i7W', 'scsi-0QEMU_QEMU_HARDDISK_82c12b62-ffbd-484b-a107-b043e35ec15c', 'scsi-SQEMU_QEMU_HARDDISK_82c12b62-ffbd-484b-a107-b043e35ec15c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.794700 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23c8bdec-2f7a-480a-98d1-592cee3b582b', 'scsi-SQEMU_QEMU_HARDDISK_23c8bdec-2f7a-480a-98d1-592cee3b582b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.794714 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:35:43.794722 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.794730 | orchestrator | 2025-09-19 11:35:43.794738 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-19 11:35:43.794747 | orchestrator | Friday 19 September 2025 11:25:25 +0000 (0:00:00.905) 0:00:34.368 ****** 2025-09-19 11:35:43.794754 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.794762 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.794770 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.794832 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.794844 | orchestrator | ok: [testbed-node-4] 2025-09-19 
11:35:43.794851 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.794859 | orchestrator | 2025-09-19 11:35:43.794879 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-19 11:35:43.794888 | orchestrator | Friday 19 September 2025 11:25:26 +0000 (0:00:01.174) 0:00:35.543 ****** 2025-09-19 11:35:43.794895 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.794903 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.794911 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.794919 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.794926 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.794934 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.794942 | orchestrator | 2025-09-19 11:35:43.794950 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 11:35:43.794957 | orchestrator | Friday 19 September 2025 11:25:27 +0000 (0:00:00.815) 0:00:36.358 ****** 2025-09-19 11:35:43.794965 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.794973 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.794981 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.794988 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.794996 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.795004 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.795011 | orchestrator | 2025-09-19 11:35:43.795019 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-19 11:35:43.795027 | orchestrator | Friday 19 September 2025 11:25:28 +0000 (0:00:00.687) 0:00:37.045 ****** 2025-09-19 11:35:43.795035 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.795043 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.795050 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.795061 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 11:35:43.795074 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.795096 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.795110 | orchestrator | 2025-09-19 11:35:43.795125 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 11:35:43.795139 | orchestrator | Friday 19 September 2025 11:25:28 +0000 (0:00:00.466) 0:00:37.512 ****** 2025-09-19 11:35:43.795151 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.795159 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.795166 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.795174 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.795182 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.795200 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.795209 | orchestrator | 2025-09-19 11:35:43.795217 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-19 11:35:43.795225 | orchestrator | Friday 19 September 2025 11:25:29 +0000 (0:00:00.687) 0:00:38.199 ****** 2025-09-19 11:35:43.795232 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.795240 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.795247 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.795255 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.795263 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.795270 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.795278 | orchestrator | 2025-09-19 11:35:43.795286 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-19 11:35:43.795294 | orchestrator | Friday 19 September 2025 11:25:30 +0000 (0:00:00.735) 0:00:38.935 ****** 2025-09-19 11:35:43.795301 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 11:35:43.795309 | 
orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-19 11:35:43.795317 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-19 11:35:43.795325 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-19 11:35:43.795332 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-19 11:35:43.795340 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-19 11:35:43.795348 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-19 11:35:43.795355 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-19 11:35:43.795363 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-19 11:35:43.795398 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-19 11:35:43.795406 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-19 11:35:43.795414 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-19 11:35:43.795422 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-19 11:35:43.795433 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-19 11:35:43.795441 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-19 11:35:43.795449 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-19 11:35:43.795457 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-19 11:35:43.795466 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-19 11:35:43.795474 | orchestrator | 2025-09-19 11:35:43.795483 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-19 11:35:43.795492 | orchestrator | Friday 19 September 2025 11:25:33 +0000 (0:00:02.978) 0:00:41.913 ****** 2025-09-19 11:35:43.795500 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-19 11:35:43.795509 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-19 
11:35:43.795518 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-19 11:35:43.795527 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.795535 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-19 11:35:43.795544 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-19 11:35:43.795553 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-19 11:35:43.795568 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-19 11:35:43.795577 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.795585 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-19 11:35:43.795593 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-19 11:35:43.795601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-19 11:35:43.795641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-19 11:35:43.795709 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-19 11:35:43.795719 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.795726 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-19 11:35:43.795734 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-19 11:35:43.795742 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-19 11:35:43.795750 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.795758 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.795765 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-19 11:35:43.795773 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-19 11:35:43.795781 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-19 11:35:43.795789 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.795796 | 
orchestrator | 2025-09-19 11:35:43.795804 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-19 11:35:43.795812 | orchestrator | Friday 19 September 2025 11:25:34 +0000 (0:00:01.570) 0:00:43.484 ****** 2025-09-19 11:35:43.795820 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.795827 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.795835 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.795899 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:43.795907 | orchestrator | 2025-09-19 11:35:43.795916 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-19 11:35:43.795925 | orchestrator | Friday 19 September 2025 11:25:35 +0000 (0:00:01.041) 0:00:44.525 ****** 2025-09-19 11:35:43.795933 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.795941 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.795948 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.795956 | orchestrator | 2025-09-19 11:35:43.795964 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-19 11:35:43.795972 | orchestrator | Friday 19 September 2025 11:25:36 +0000 (0:00:00.480) 0:00:45.006 ****** 2025-09-19 11:35:43.795979 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.795987 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.795995 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.796003 | orchestrator | 2025-09-19 11:35:43.796010 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-19 11:35:43.796018 | orchestrator | Friday 19 September 2025 11:25:36 +0000 (0:00:00.598) 0:00:45.605 ****** 2025-09-19 11:35:43.796026 | 
orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.796034 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.796042 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.796050 | orchestrator | 2025-09-19 11:35:43.796058 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-19 11:35:43.796065 | orchestrator | Friday 19 September 2025 11:25:37 +0000 (0:00:00.484) 0:00:46.090 ****** 2025-09-19 11:35:43.796073 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.796081 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.796091 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.796105 | orchestrator | 2025-09-19 11:35:43.796119 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-19 11:35:43.796132 | orchestrator | Friday 19 September 2025 11:25:38 +0000 (0:00:00.695) 0:00:46.785 ****** 2025-09-19 11:35:43.796171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:35:43.796186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:35:43.796194 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:35:43.796202 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.796210 | orchestrator | 2025-09-19 11:35:43.796217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-19 11:35:43.796224 | orchestrator | Friday 19 September 2025 11:25:38 +0000 (0:00:00.334) 0:00:47.119 ****** 2025-09-19 11:35:43.796230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:35:43.796237 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:35:43.796248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:35:43.796255 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.796262 | 
orchestrator | 2025-09-19 11:35:43.796268 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-19 11:35:43.796275 | orchestrator | Friday 19 September 2025 11:25:38 +0000 (0:00:00.391) 0:00:47.511 ****** 2025-09-19 11:35:43.796281 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:35:43.796288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:35:43.796294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:35:43.796301 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.796307 | orchestrator | 2025-09-19 11:35:43.796314 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-19 11:35:43.796320 | orchestrator | Friday 19 September 2025 11:25:39 +0000 (0:00:00.516) 0:00:48.027 ****** 2025-09-19 11:35:43.796327 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.796405 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.796414 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.796421 | orchestrator | 2025-09-19 11:35:43.796428 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-19 11:35:43.796434 | orchestrator | Friday 19 September 2025 11:25:39 +0000 (0:00:00.530) 0:00:48.558 ****** 2025-09-19 11:35:43.796441 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 11:35:43.796448 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-19 11:35:43.796454 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-19 11:35:43.796461 | orchestrator | 2025-09-19 11:35:43.796467 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-19 11:35:43.796474 | orchestrator | Friday 19 September 2025 11:25:40 +0000 (0:00:00.730) 0:00:49.288 ****** 2025-09-19 11:35:43.796501 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 
2025-09-19 11:35:43.796509 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 11:35:43.796516 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 11:35:43.796523 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-09-19 11:35:43.796529 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 11:35:43.796536 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 11:35:43.796542 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 11:35:43.796548 | orchestrator | 2025-09-19 11:35:43.796555 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-19 11:35:43.796561 | orchestrator | Friday 19 September 2025 11:25:41 +0000 (0:00:00.957) 0:00:50.246 ****** 2025-09-19 11:35:43.796568 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 11:35:43.796575 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 11:35:43.796581 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 11:35:43.796594 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-09-19 11:35:43.796600 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 11:35:43.796607 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 11:35:43.796613 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 11:35:43.796620 | orchestrator | 2025-09-19 11:35:43.796626 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2025-09-19 11:35:43.796633 | orchestrator | Friday 19 September 2025 11:25:43 +0000 (0:00:02.338) 0:00:52.585 ****** 2025-09-19 11:35:43.796639 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:43.796666 | orchestrator | 2025-09-19 11:35:43.796674 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 11:35:43.796680 | orchestrator | Friday 19 September 2025 11:25:45 +0000 (0:00:01.569) 0:00:54.154 ****** 2025-09-19 11:35:43.796687 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:43.796694 | orchestrator | 2025-09-19 11:35:43.796700 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 11:35:43.796707 | orchestrator | Friday 19 September 2025 11:25:47 +0000 (0:00:02.050) 0:00:56.204 ****** 2025-09-19 11:35:43.796713 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.796720 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.796726 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.796733 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.796739 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.796746 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.796752 | orchestrator | 2025-09-19 11:35:43.796759 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 11:35:43.796765 | orchestrator | Friday 19 September 2025 11:25:48 +0000 (0:00:01.390) 0:00:57.595 ****** 2025-09-19 11:35:43.796772 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.796778 | orchestrator | skipping: [testbed-node-1] 2025-09-19 
11:35:43.796784 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.796791 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.796797 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.796804 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.796810 | orchestrator | 2025-09-19 11:35:43.796816 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 11:35:43.796827 | orchestrator | Friday 19 September 2025 11:25:50 +0000 (0:00:01.264) 0:00:58.859 ****** 2025-09-19 11:35:43.796834 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.796840 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.796847 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.796853 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.796860 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.796866 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.796872 | orchestrator | 2025-09-19 11:35:43.796879 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 11:35:43.796885 | orchestrator | Friday 19 September 2025 11:25:51 +0000 (0:00:01.610) 0:01:00.469 ****** 2025-09-19 11:35:43.796892 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.796898 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.796905 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.796911 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.796918 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.796924 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.796930 | orchestrator | 2025-09-19 11:35:43.796937 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 11:35:43.796948 | orchestrator | Friday 19 September 2025 11:25:53 +0000 (0:00:01.333) 0:01:01.803 ****** 2025-09-19 11:35:43.796955 | orchestrator | ok: [testbed-node-0] 
2025-09-19 11:35:43.796961 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.796968 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.796974 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.796981 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.796987 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.796994 | orchestrator | 2025-09-19 11:35:43.797000 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 11:35:43.797007 | orchestrator | Friday 19 September 2025 11:25:54 +0000 (0:00:01.214) 0:01:03.017 ****** 2025-09-19 11:35:43.797034 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.797041 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.797048 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.797055 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.797063 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.797070 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.797078 | orchestrator | 2025-09-19 11:35:43.797086 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 11:35:43.797094 | orchestrator | Friday 19 September 2025 11:25:55 +0000 (0:00:01.394) 0:01:04.412 ****** 2025-09-19 11:35:43.797101 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.797109 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.797116 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.797123 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.797134 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.797146 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.797157 | orchestrator | 2025-09-19 11:35:43.797170 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 11:35:43.797182 | orchestrator | Friday 19 September 
2025 11:25:57 +0000 (0:00:01.265) 0:01:05.677 ******
2025-09-19 11:35:43.797194 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.797205 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.797215 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.797223 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.797230 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.797238 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.797245 | orchestrator |
2025-09-19 11:35:43.797253 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 11:35:43.797261 | orchestrator | Friday 19 September 2025 11:25:59 +0000 (0:00:02.184) 0:01:07.862 ******
2025-09-19 11:35:43.797269 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.797276 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.797283 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.797290 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.797298 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.797305 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.797312 | orchestrator |
2025-09-19 11:35:43.797320 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 11:35:43.797327 | orchestrator | Friday 19 September 2025 11:26:01 +0000 (0:00:02.367) 0:01:10.230 ******
2025-09-19 11:35:43.797335 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.797342 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.797349 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.797357 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.797364 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.797372 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.797379 | orchestrator |
2025-09-19 11:35:43.797386 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 11:35:43.797394 | orchestrator | Friday 19 September 2025 11:26:02 +0000 (0:00:00.651) 0:01:10.882 ******
2025-09-19 11:35:43.797401 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.797409 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.797417 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.797430 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.797437 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.797444 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.797450 | orchestrator |
2025-09-19 11:35:43.797457 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 11:35:43.797463 | orchestrator | Friday 19 September 2025 11:26:03 +0000 (0:00:00.841) 0:01:11.724 ******
2025-09-19 11:35:43.797470 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.797476 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.797483 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.797489 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.797495 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.797502 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.797508 | orchestrator |
2025-09-19 11:35:43.797515 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 11:35:43.797521 | orchestrator | Friday 19 September 2025 11:26:03 +0000 (0:00:00.575) 0:01:12.299 ******
2025-09-19 11:35:43.797528 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.797534 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.797541 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.797547 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.797554 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.797560 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.797567 | orchestrator |
2025-09-19 11:35:43.797573 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 11:35:43.797587 | orchestrator | Friday 19 September 2025 11:26:04 +0000 (0:00:00.791) 0:01:13.091 ******
2025-09-19 11:35:43.797594 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.797600 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.797607 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.797613 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.797620 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.797626 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.797632 | orchestrator |
2025-09-19 11:35:43.797639 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 11:35:43.797646 | orchestrator | Friday 19 September 2025 11:26:05 +0000 (0:00:00.803) 0:01:13.895 ******
2025-09-19 11:35:43.797671 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.797678 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.797685 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.797691 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.797697 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.797704 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.797710 | orchestrator |
2025-09-19 11:35:43.797717 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 11:35:43.797723 | orchestrator | Friday 19 September 2025 11:26:06 +0000 (0:00:00.982) 0:01:14.877 ******
2025-09-19 11:35:43.797730 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.797736 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.797743 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.797749 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.797756 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.797762 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.797769 | orchestrator |
2025-09-19 11:35:43.797775 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 11:35:43.797804 | orchestrator | Friday 19 September 2025 11:26:06 +0000 (0:00:00.730) 0:01:15.608 ******
2025-09-19 11:35:43.797812 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.797818 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.797825 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.797831 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.797838 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.797844 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.797851 | orchestrator |
2025-09-19 11:35:43.797862 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 11:35:43.797869 | orchestrator | Friday 19 September 2025 11:26:07 +0000 (0:00:00.903) 0:01:16.511 ******
2025-09-19 11:35:43.797876 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.797882 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.797889 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.797895 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.797901 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.797908 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.797914 | orchestrator |
2025-09-19 11:35:43.797921 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 11:35:43.797928 | orchestrator | Friday 19 September 2025 11:26:08 +0000 (0:00:00.713) 0:01:17.224 ******
2025-09-19 11:35:43.797934 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.797940 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.797947 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.797953 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.797960 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.797966 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.797973 | orchestrator |
2025-09-19 11:35:43.797979 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-09-19 11:35:43.797986 | orchestrator | Friday 19 September 2025 11:26:10 +0000 (0:00:01.496) 0:01:18.721 ******
2025-09-19 11:35:43.797992 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.797999 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:35:43.798005 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.798012 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.798043 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:35:43.798050 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:35:43.798056 | orchestrator |
2025-09-19 11:35:43.798063 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-09-19 11:35:43.798069 | orchestrator | Friday 19 September 2025 11:26:12 +0000 (0:00:02.150) 0:01:20.871 ******
2025-09-19 11:35:43.798076 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:35:43.798082 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.798089 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.798095 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:35:43.798102 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.798108 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:35:43.798115 | orchestrator |
2025-09-19 11:35:43.798121 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-09-19 11:35:43.798128 | orchestrator | Friday 19 September 2025 11:26:14 +0000 (0:00:02.342) 0:01:23.214 ******
2025-09-19 11:35:43.798135 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.798142 | orchestrator |
2025-09-19 11:35:43.798148 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-09-19 11:35:43.798157 | orchestrator | Friday 19 September 2025 11:26:15 +0000 (0:00:01.112) 0:01:24.326 ******
2025-09-19 11:35:43.798168 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.798180 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.798192 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.798204 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.798215 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.798227 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.798235 | orchestrator |
2025-09-19 11:35:43.798242 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-09-19 11:35:43.798248 | orchestrator | Friday 19 September 2025 11:26:16 +0000 (0:00:00.865) 0:01:25.192 ******
2025-09-19 11:35:43.798255 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.798261 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.798268 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.798274 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.798286 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.798292 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.798299 | orchestrator |
2025-09-19 11:35:43.798310 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-09-19 11:35:43.798316 | orchestrator | Friday 19 September 2025 11:26:17 +0000 (0:00:00.789) 0:01:25.981 ******
2025-09-19 11:35:43.798323 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-19 11:35:43.798330 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-19 11:35:43.798336 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-19 11:35:43.798342 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-19 11:35:43.798349 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-19 11:35:43.798355 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-19 11:35:43.798362 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-19 11:35:43.798368 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-19 11:35:43.798375 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-19 11:35:43.798381 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-19 11:35:43.798388 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-19 11:35:43.798394 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-19 11:35:43.798401 | orchestrator |
2025-09-19 11:35:43.798432 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-09-19 11:35:43.798440 | orchestrator | Friday 19 September 2025 11:26:19 +0000 (0:00:01.765) 0:01:27.747 ******
2025-09-19 11:35:43.798446 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.798453 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.798460 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.798466 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:35:43.798473 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:35:43.798479 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:35:43.798486 | orchestrator |
2025-09-19 11:35:43.798492 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-09-19 11:35:43.798499 | orchestrator | Friday 19 September 2025 11:26:19 +0000 (0:00:00.872) 0:01:28.619 ******
2025-09-19 11:35:43.798506 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.798512 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.798519 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.798525 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.798532 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.798538 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.798544 | orchestrator |
2025-09-19 11:35:43.798551 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-09-19 11:35:43.798558 | orchestrator | Friday 19 September 2025 11:26:20 +0000 (0:00:00.715) 0:01:29.335 ******
2025-09-19 11:35:43.798564 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.798571 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.798577 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.798584 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.798590 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.798597 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.798603 | orchestrator |
2025-09-19 11:35:43.798610 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-09-19 11:35:43.798616 | orchestrator | Friday 19 September 2025 11:26:21 +0000 (0:00:00.534) 0:01:29.870 ******
2025-09-19 11:35:43.798623 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.798629 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.798640 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.798661 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.798669 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.798676 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.798682 | orchestrator |
2025-09-19 11:35:43.798689 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-09-19 11:35:43.798695 | orchestrator | Friday 19 September 2025 11:26:21 +0000 (0:00:00.671) 0:01:30.541 ******
2025-09-19 11:35:43.798702 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.798708 | orchestrator |
2025-09-19 11:35:43.798715 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-09-19 11:35:43.798721 | orchestrator | Friday 19 September 2025 11:26:22 +0000 (0:00:01.048) 0:01:31.590 ******
2025-09-19 11:35:43.798728 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.798734 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.798741 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.798747 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.798753 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.798760 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.798766 | orchestrator |
2025-09-19 11:35:43.798773 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-09-19 11:35:43.798779 | orchestrator | Friday 19 September 2025 11:27:07 +0000 (0:00:44.560) 0:02:16.150 ******
2025-09-19 11:35:43.798786 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-19 11:35:43.798792 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-19 11:35:43.798799 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-19 11:35:43.798805 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.798812 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-19 11:35:43.798818 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-19 11:35:43.798828 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-19 11:35:43.798835 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.798841 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-19 11:35:43.798848 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-19 11:35:43.798854 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-19 11:35:43.798861 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.798867 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-19 11:35:43.798874 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-19 11:35:43.798880 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-19 11:35:43.798886 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.798893 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-19 11:35:43.798899 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-19 11:35:43.798906 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-19 11:35:43.798912 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.798919 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-19 11:35:43.798925 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-19 11:35:43.798932 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-19 11:35:43.798958 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.798966 | orchestrator |
2025-09-19 11:35:43.798980 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-09-19 11:35:43.798987 | orchestrator | Friday 19 September 2025 11:27:08 +0000 (0:00:00.765) 0:02:16.916 ******
2025-09-19 11:35:43.798994 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.799000 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.799007 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.799013 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.799020 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.799026 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.799033 | orchestrator |
2025-09-19 11:35:43.799039 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-09-19 11:35:43.799046 | orchestrator | Friday 19 September 2025 11:27:08 +0000 (0:00:00.514) 0:02:17.431 ******
2025-09-19 11:35:43.799052 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.799059 | orchestrator |
2025-09-19 11:35:43.799065 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-09-19 11:35:43.799072 | orchestrator | Friday 19 September 2025 11:27:08 +0000 (0:00:00.125) 0:02:17.557 ******
2025-09-19 11:35:43.799078 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.799085 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.799091 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.799098 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.799104 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.799111 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.799117 | orchestrator |
2025-09-19 11:35:43.799124 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-09-19 11:35:43.799130 | orchestrator | Friday 19 September 2025 11:27:09 +0000 (0:00:00.922) 0:02:18.480 ******
2025-09-19 11:35:43.799137 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.799143 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.799150 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.799156 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.799163 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.799169 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.799176 | orchestrator |
2025-09-19 11:35:43.799182 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-09-19 11:35:43.799192 | orchestrator | Friday 19 September 2025 11:27:10 +0000 (0:00:00.732) 0:02:19.212 ******
2025-09-19 11:35:43.799204 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.799216 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.799229 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.799240 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.799251 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.799262 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.799269 | orchestrator |
2025-09-19 11:35:43.799275 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-09-19 11:35:43.799281 | orchestrator | Friday 19 September 2025 11:27:11 +0000 (0:00:00.783) 0:02:19.996 ******
2025-09-19 11:35:43.799288 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.799294 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.799301 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.799307 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.799314 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.799320 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.799326 | orchestrator |
2025-09-19 11:35:43.799333 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-09-19 11:35:43.799339 | orchestrator | Friday 19 September 2025 11:27:13 +0000 (0:00:02.061) 0:02:22.057 ******
2025-09-19 11:35:43.799346 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.799352 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.799359 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.799365 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.799371 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.799378 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.799391 | orchestrator |
2025-09-19 11:35:43.799398 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-09-19 11:35:43.799404 | orchestrator | Friday 19 September 2025 11:27:14 +0000 (0:00:00.874) 0:02:22.931 ******
2025-09-19 11:35:43.799412 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.799419 | orchestrator |
2025-09-19 11:35:43.799426 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-09-19 11:35:43.799436 | orchestrator | Friday 19 September 2025 11:27:15 +0000 (0:00:01.316) 0:02:24.247 ******
2025-09-19 11:35:43.799443 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.799450 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.799456 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.799462 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.799469 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.799475 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.799482 | orchestrator |
2025-09-19 11:35:43.799488 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-09-19 11:35:43.799495 | orchestrator | Friday 19 September 2025 11:27:16 +0000 (0:00:00.652) 0:02:24.900 ******
2025-09-19 11:35:43.799501 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.799508 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.799514 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.799520 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.799527 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.799533 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.799539 | orchestrator |
2025-09-19 11:35:43.799546 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-09-19 11:35:43.799552 | orchestrator | Friday 19 September 2025 11:27:17 +0000 (0:00:00.934) 0:02:25.834 ******
2025-09-19 11:35:43.799559 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.799565 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.799571 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.799578 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.799584 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.799590 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.799597 | orchestrator |
2025-09-19 11:35:43.799603 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-09-19 11:35:43.799631 | orchestrator | Friday 19 September 2025 11:27:17 +0000 (0:00:00.670) 0:02:26.505 ******
2025-09-19 11:35:43.799638 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.799645 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.799670 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.799677 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.799683 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.799690 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.799696 | orchestrator |
2025-09-19 11:35:43.799703 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-09-19 11:35:43.799709 | orchestrator | Friday 19 September 2025 11:27:18 +0000 (0:00:00.907) 0:02:27.412 ******
2025-09-19 11:35:43.799716 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.799722 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.799729 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.799735 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.799742 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.799748 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.799755 | orchestrator |
2025-09-19 11:35:43.799761 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-09-19 11:35:43.799768 | orchestrator | Friday 19 September 2025 11:27:19 +0000 (0:00:00.754) 0:02:28.167 ******
2025-09-19 11:35:43.799774 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.799781 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.799792 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.799798 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.799805 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.799811 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.799818 | orchestrator |
2025-09-19 11:35:43.799824 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-09-19 11:35:43.799831 | orchestrator | Friday 19 September 2025 11:27:20 +0000 (0:00:01.001) 0:02:29.168 ******
2025-09-19 11:35:43.799838 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.799844 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.799851 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.799857 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.799863 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.799870 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.799876 | orchestrator |
2025-09-19 11:35:43.799883 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-09-19 11:35:43.799890 | orchestrator | Friday 19 September 2025 11:27:21 +0000 (0:00:00.659) 0:02:29.828 ******
2025-09-19 11:35:43.799896 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.799902 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.799909 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.799915 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.799922 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.799928 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.799935 | orchestrator |
2025-09-19 11:35:43.799941 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-09-19 11:35:43.799948 | orchestrator | Friday 19 September 2025 11:27:22 +0000 (0:00:00.956) 0:02:30.785 ******
2025-09-19 11:35:43.799954 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.799961 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.799967 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.799974 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.799980 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.799987 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.799993 | orchestrator |
2025-09-19 11:35:43.800000 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-09-19 11:35:43.800006 | orchestrator | Friday 19 September 2025 11:27:23 +0000 (0:00:01.387) 0:02:32.172 ******
2025-09-19 11:35:43.800013 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.800020 | orchestrator |
2025-09-19 11:35:43.800026 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-09-19 11:35:43.800033 | orchestrator | Friday 19 September 2025 11:27:24 +0000 (0:00:01.152) 0:02:33.325 ******
2025-09-19 11:35:43.800039 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-09-19 11:35:43.800046 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-09-19 11:35:43.800052 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-09-19 11:35:43.800062 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-09-19 11:35:43.800069 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-09-19 11:35:43.800075 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-09-19 11:35:43.800082 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-09-19 11:35:43.800088 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-09-19 11:35:43.800095 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-09-19 11:35:43.800101 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-09-19 11:35:43.800108 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-09-19 11:35:43.800114 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-09-19 11:35:43.800121 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-09-19 11:35:43.800132 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-09-19 11:35:43.800138 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-09-19 11:35:43.800145 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-09-19 11:35:43.800151 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-09-19 11:35:43.800158 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-09-19 11:35:43.800164 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-09-19 11:35:43.800171 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-09-19 11:35:43.800177 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-09-19 11:35:43.800203 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-09-19 11:35:43.800212 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-09-19 11:35:43.800223 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-09-19 11:35:43.800234 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-09-19 11:35:43.800246 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-09-19 11:35:43.800258 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-09-19 11:35:43.800269 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-09-19 11:35:43.800280 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-09-19 11:35:43.800290 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-09-19 11:35:43.800296 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-09-19 11:35:43.800303 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-09-19 11:35:43.800309 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-09-19 11:35:43.800316 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-09-19 11:35:43.800322 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-09-19 11:35:43.800328 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-09-19 11:35:43.800335 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-09-19 11:35:43.800341 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-09-19 11:35:43.800348 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-09-19 11:35:43.800354 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-09-19 11:35:43.800361 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-09-19 11:35:43.800367 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-09-19 11:35:43.800373 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-09-19 11:35:43.800380 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-09-19 11:35:43.800386 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-09-19 11:35:43.800393 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-19 11:35:43.800399 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-09-19 11:35:43.800406 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-19 11:35:43.800412 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-09-19 11:35:43.800418 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-19 11:35:43.800425 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-09-19 11:35:43.800431 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-19 11:35:43.800438 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-19 11:35:43.800444 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-19 11:35:43.800451 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-19 11:35:43.800457 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-19 11:35:43.800470 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-19 11:35:43.800477 | orchestrator | changed: [testbed-node-0]
=> (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 11:35:43.800483 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 11:35:43.800490 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 11:35:43.800496 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 11:35:43.800503 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 11:35:43.800509 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 11:35:43.800519 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 11:35:43.800526 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 11:35:43.800532 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 11:35:43.800539 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 11:35:43.800545 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 11:35:43.800552 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 11:35:43.800558 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 11:35:43.800564 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 11:35:43.800571 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 11:35:43.800577 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 11:35:43.800584 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 11:35:43.800590 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 11:35:43.800597 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 
2025-09-19 11:35:43.800603 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 11:35:43.800610 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 11:35:43.800616 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 11:35:43.800644 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 11:35:43.800692 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-19 11:35:43.800699 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-19 11:35:43.800706 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 11:35:43.800712 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 11:35:43.800719 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-19 11:35:43.800725 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 11:35:43.800732 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-19 11:35:43.800738 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-19 11:35:43.800745 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-19 11:35:43.800752 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 11:35:43.800758 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-19 11:35:43.800764 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-19 11:35:43.800771 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-19 11:35:43.800777 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-19 11:35:43.800784 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-19 11:35:43.800790 | orchestrator | changed: [testbed-node-4] => 
(item=/var/log/ceph) 2025-09-19 11:35:43.800803 | orchestrator | 2025-09-19 11:35:43.800810 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-19 11:35:43.800816 | orchestrator | Friday 19 September 2025 11:27:31 +0000 (0:00:06.472) 0:02:39.798 ****** 2025-09-19 11:35:43.800823 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.800829 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.800835 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.800842 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:43.800849 | orchestrator | 2025-09-19 11:35:43.800855 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-19 11:35:43.800862 | orchestrator | Friday 19 September 2025 11:27:32 +0000 (0:00:01.426) 0:02:41.224 ****** 2025-09-19 11:35:43.800868 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 11:35:43.800875 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 11:35:43.800882 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 11:35:43.800888 | orchestrator | 2025-09-19 11:35:43.800895 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-19 11:35:43.800901 | orchestrator | Friday 19 September 2025 11:27:33 +0000 (0:00:00.826) 0:02:42.050 ****** 2025-09-19 11:35:43.800908 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 11:35:43.800914 | orchestrator | 
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 11:35:43.800921 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 11:35:43.800927 | orchestrator | 2025-09-19 11:35:43.800934 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-09-19 11:35:43.800940 | orchestrator | Friday 19 September 2025 11:27:35 +0000 (0:00:01.621) 0:02:43.672 ****** 2025-09-19 11:35:43.800947 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.800959 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.800966 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.800972 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.800979 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.800985 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.800992 | orchestrator | 2025-09-19 11:35:43.800998 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-19 11:35:43.801005 | orchestrator | Friday 19 September 2025 11:27:36 +0000 (0:00:01.162) 0:02:44.835 ****** 2025-09-19 11:35:43.801011 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801017 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801023 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801029 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.801035 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.801041 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.801047 | orchestrator | 2025-09-19 11:35:43.801053 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-19 11:35:43.801059 | orchestrator | Friday 19 September 2025 11:27:36 +0000 (0:00:00.723) 0:02:45.559 ****** 2025-09-19 
11:35:43.801065 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801071 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801077 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801083 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.801089 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.801095 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.801101 | orchestrator | 2025-09-19 11:35:43.801107 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-19 11:35:43.801117 | orchestrator | Friday 19 September 2025 11:27:38 +0000 (0:00:01.145) 0:02:46.705 ****** 2025-09-19 11:35:43.801123 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801129 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801154 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801162 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.801168 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.801174 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.801180 | orchestrator | 2025-09-19 11:35:43.801186 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-19 11:35:43.801192 | orchestrator | Friday 19 September 2025 11:27:38 +0000 (0:00:00.615) 0:02:47.320 ****** 2025-09-19 11:35:43.801198 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801204 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801210 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801216 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.801222 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.801228 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.801233 | orchestrator | 2025-09-19 11:35:43.801241 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many 
osds are to be created] *** 2025-09-19 11:35:43.801250 | orchestrator | Friday 19 September 2025 11:27:39 +0000 (0:00:00.768) 0:02:48.088 ****** 2025-09-19 11:35:43.801261 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801271 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801282 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801293 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.801303 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.801313 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.801324 | orchestrator | 2025-09-19 11:35:43.801330 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-19 11:35:43.801337 | orchestrator | Friday 19 September 2025 11:27:40 +0000 (0:00:00.577) 0:02:48.666 ****** 2025-09-19 11:35:43.801343 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801349 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801355 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801361 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.801367 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.801372 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.801378 | orchestrator | 2025-09-19 11:35:43.801385 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-19 11:35:43.801391 | orchestrator | Friday 19 September 2025 11:27:40 +0000 (0:00:00.929) 0:02:49.595 ****** 2025-09-19 11:35:43.801397 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801403 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801409 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801414 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.801420 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
11:35:43.801426 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.801432 | orchestrator | 2025-09-19 11:35:43.801438 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-19 11:35:43.801445 | orchestrator | Friday 19 September 2025 11:27:41 +0000 (0:00:00.751) 0:02:50.346 ****** 2025-09-19 11:35:43.801450 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801456 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801462 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801468 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.801475 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.801481 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.801487 | orchestrator | 2025-09-19 11:35:43.801493 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-19 11:35:43.801499 | orchestrator | Friday 19 September 2025 11:27:45 +0000 (0:00:03.548) 0:02:53.895 ****** 2025-09-19 11:35:43.801510 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801516 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801522 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801528 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.801534 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.801540 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.801546 | orchestrator | 2025-09-19 11:35:43.801553 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-19 11:35:43.801559 | orchestrator | Friday 19 September 2025 11:27:45 +0000 (0:00:00.606) 0:02:54.502 ****** 2025-09-19 11:35:43.801565 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801571 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801577 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801583 | 
orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.801589 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.801595 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.801601 | orchestrator | 2025-09-19 11:35:43.801610 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-19 11:35:43.801616 | orchestrator | Friday 19 September 2025 11:27:46 +0000 (0:00:01.022) 0:02:55.524 ****** 2025-09-19 11:35:43.801622 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801628 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801634 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801640 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.801660 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.801668 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.801674 | orchestrator | 2025-09-19 11:35:43.801680 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-19 11:35:43.801686 | orchestrator | Friday 19 September 2025 11:27:47 +0000 (0:00:00.822) 0:02:56.347 ****** 2025-09-19 11:35:43.801692 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801698 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801704 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801710 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 11:35:43.801716 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 11:35:43.801722 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 11:35:43.801729 | orchestrator | 2025-09-19 11:35:43.801735 | orchestrator | TASK [ceph-config : Set config 
to cluster] ************************************* 2025-09-19 11:35:43.801762 | orchestrator | Friday 19 September 2025 11:27:48 +0000 (0:00:01.120) 0:02:57.467 ****** 2025-09-19 11:35:43.801769 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801775 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801781 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801788 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-19 11:35:43.801796 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-19 11:35:43.801803 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.801810 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-19 11:35:43.801821 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-09-19 11:35:43.801827 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.801834 | orchestrator | skipping: [testbed-node-4] => 
(item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-19 11:35:43.801840 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-19 11:35:43.801846 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.801852 | orchestrator | 2025-09-19 11:35:43.801858 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-09-19 11:35:43.801864 | orchestrator | Friday 19 September 2025 11:27:49 +0000 (0:00:00.987) 0:02:58.455 ****** 2025-09-19 11:35:43.801870 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801876 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801882 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801888 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.801895 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.801901 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.801907 | orchestrator | 2025-09-19 11:35:43.801913 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-09-19 11:35:43.801919 | orchestrator | Friday 19 September 2025 11:27:50 +0000 (0:00:00.815) 0:02:59.270 ****** 2025-09-19 11:35:43.801925 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801931 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801937 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.801943 | orchestrator | skipping: [testbed-node-5] 2025-09-19 
11:35:43.801949 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.801959 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.801965 | orchestrator | 2025-09-19 11:35:43.801971 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-19 11:35:43.801977 | orchestrator | Friday 19 September 2025 11:27:51 +0000 (0:00:00.706) 0:02:59.976 ****** 2025-09-19 11:35:43.801983 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.801989 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.801995 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.802001 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.802007 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.802013 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.802055 | orchestrator | 2025-09-19 11:35:43.802061 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-19 11:35:43.802067 | orchestrator | Friday 19 September 2025 11:27:52 +0000 (0:00:00.789) 0:03:00.765 ****** 2025-09-19 11:35:43.802073 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.802079 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.802085 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.802091 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.802097 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.802103 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.802109 | orchestrator | 2025-09-19 11:35:43.802116 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-19 11:35:43.802126 | orchestrator | Friday 19 September 2025 11:27:52 +0000 (0:00:00.605) 0:03:01.371 ****** 2025-09-19 11:35:43.802132 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.802138 | 
orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.802144 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.802169 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.802176 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.802182 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.802188 | orchestrator | 2025-09-19 11:35:43.802194 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-19 11:35:43.802200 | orchestrator | Friday 19 September 2025 11:27:53 +0000 (0:00:00.761) 0:03:02.133 ****** 2025-09-19 11:35:43.802206 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.802212 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.802219 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.802224 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.802231 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.802237 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.802243 | orchestrator | 2025-09-19 11:35:43.802249 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-19 11:35:43.802255 | orchestrator | Friday 19 September 2025 11:27:54 +0000 (0:00:00.851) 0:03:02.984 ****** 2025-09-19 11:35:43.802261 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-19 11:35:43.802267 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-19 11:35:43.802275 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-19 11:35:43.802286 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.802296 | orchestrator | 2025-09-19 11:35:43.802307 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-19 11:35:43.802318 | orchestrator | Friday 19 September 2025 11:27:55 +0000 (0:00:00.687) 0:03:03.672 ****** 2025-09-19 11:35:43.802328 | orchestrator | 
skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-19 11:35:43.802338 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-19 11:35:43.802346 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-19 11:35:43.802352 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.802358 | orchestrator | 2025-09-19 11:35:43.802365 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-19 11:35:43.802371 | orchestrator | Friday 19 September 2025 11:27:55 +0000 (0:00:00.766) 0:03:04.439 ****** 2025-09-19 11:35:43.802377 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-19 11:35:43.802382 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-19 11:35:43.802388 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-09-19 11:35:43.802394 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.802400 | orchestrator | 2025-09-19 11:35:43.802406 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-19 11:35:43.802412 | orchestrator | Friday 19 September 2025 11:27:56 +0000 (0:00:01.003) 0:03:05.442 ****** 2025-09-19 11:35:43.802418 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.802424 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.802430 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.802436 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.802442 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.802448 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.802454 | orchestrator | 2025-09-19 11:35:43.802460 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-19 11:35:43.802466 | orchestrator | Friday 19 September 2025 11:27:57 +0000 (0:00:00.742) 0:03:06.185 ****** 2025-09-19 11:35:43.802472 | orchestrator | skipping: 
[testbed-node-0] => (item=0)  2025-09-19 11:35:43.802478 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.802484 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-19 11:35:43.802490 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-19 11:35:43.802500 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.802506 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.802512 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 11:35:43.802518 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-19 11:35:43.802524 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-19 11:35:43.802530 | orchestrator | 2025-09-19 11:35:43.802536 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-19 11:35:43.802542 | orchestrator | Friday 19 September 2025 11:28:00 +0000 (0:00:03.063) 0:03:09.249 ****** 2025-09-19 11:35:43.802548 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:43.802554 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:43.802560 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:43.802565 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:35:43.802571 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:35:43.802581 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:35:43.802587 | orchestrator | 2025-09-19 11:35:43.802594 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 11:35:43.802600 | orchestrator | Friday 19 September 2025 11:28:03 +0000 (0:00:03.193) 0:03:12.442 ****** 2025-09-19 11:35:43.802606 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:43.802612 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:35:43.802617 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:35:43.802623 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:43.802629 | orchestrator | changed: [testbed-node-2] 2025-09-19 
11:35:43.802635 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:35:43.802641 | orchestrator |
2025-09-19 11:35:43.802663 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-09-19 11:35:43.802671 | orchestrator | Friday 19 September 2025 11:28:05 +0000 (0:00:01.478) 0:03:13.921 ******
2025-09-19 11:35:43.802676 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.802682 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.802688 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.802694 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:35:43.802700 | orchestrator |
2025-09-19 11:35:43.802706 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-09-19 11:35:43.802712 | orchestrator | Friday 19 September 2025 11:28:06 +0000 (0:00:01.331) 0:03:15.253 ******
2025-09-19 11:35:43.802718 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.802724 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.802730 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.802736 | orchestrator |
2025-09-19 11:35:43.802742 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-09-19 11:35:43.802770 | orchestrator | Friday 19 September 2025 11:28:06 +0000 (0:00:00.305) 0:03:15.559 ******
2025-09-19 11:35:43.802777 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.802783 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.802789 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.802795 | orchestrator |
2025-09-19 11:35:43.802801 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-09-19 11:35:43.802807 | orchestrator | Friday 19 September 2025 11:28:08 +0000 (0:00:01.442) 0:03:17.001 ******
2025-09-19 11:35:43.802813 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 11:35:43.802819 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 11:35:43.802825 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 11:35:43.802831 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.802837 | orchestrator |
2025-09-19 11:35:43.802843 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-09-19 11:35:43.802849 | orchestrator | Friday 19 September 2025 11:28:09 +0000 (0:00:00.731) 0:03:17.732 ******
2025-09-19 11:35:43.802855 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.802866 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.802872 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.802878 | orchestrator |
2025-09-19 11:35:43.802884 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-19 11:35:43.802890 | orchestrator | Friday 19 September 2025 11:28:09 +0000 (0:00:00.539) 0:03:18.272 ******
2025-09-19 11:35:43.802896 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.802902 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.802908 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.802914 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.802920 | orchestrator |
2025-09-19 11:35:43.802926 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-19 11:35:43.802932 | orchestrator | Friday 19 September 2025 11:28:10 +0000 (0:00:00.966) 0:03:19.239 ******
2025-09-19 11:35:43.802939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 11:35:43.802945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 11:35:43.802951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 11:35:43.802957 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.802963 | orchestrator |
2025-09-19 11:35:43.802969 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-19 11:35:43.802975 | orchestrator | Friday 19 September 2025 11:28:11 +0000 (0:00:00.567) 0:03:19.806 ******
2025-09-19 11:35:43.802981 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.802987 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.802993 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.802999 | orchestrator |
2025-09-19 11:35:43.803005 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-19 11:35:43.803011 | orchestrator | Friday 19 September 2025 11:28:11 +0000 (0:00:00.669) 0:03:20.475 ******
2025-09-19 11:35:43.803017 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803023 | orchestrator |
2025-09-19 11:35:43.803029 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-19 11:35:43.803035 | orchestrator | Friday 19 September 2025 11:28:12 +0000 (0:00:00.279) 0:03:20.754 ******
2025-09-19 11:35:43.803041 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.803047 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.803053 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803059 | orchestrator |
2025-09-19 11:35:43.803065 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-19 11:35:43.803071 | orchestrator | Friday 19 September 2025 11:28:12 +0000 (0:00:00.387) 0:03:21.142 ******
2025-09-19 11:35:43.803077 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803083 | orchestrator |
2025-09-19 11:35:43.803089 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-19 11:35:43.803095 | orchestrator | Friday 19 September 2025 11:28:12 +0000 (0:00:00.274) 0:03:21.416 ******
2025-09-19 11:35:43.803101 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803107 | orchestrator |
2025-09-19 11:35:43.803113 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-19 11:35:43.803123 | orchestrator | Friday 19 September 2025 11:28:12 +0000 (0:00:00.222) 0:03:21.638 ******
2025-09-19 11:35:43.803129 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803135 | orchestrator |
2025-09-19 11:35:43.803141 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-19 11:35:43.803147 | orchestrator | Friday 19 September 2025 11:28:13 +0000 (0:00:00.165) 0:03:21.804 ******
2025-09-19 11:35:43.803153 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803159 | orchestrator |
2025-09-19 11:35:43.803165 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-19 11:35:43.803171 | orchestrator | Friday 19 September 2025 11:28:13 +0000 (0:00:00.223) 0:03:22.028 ******
2025-09-19 11:35:43.803177 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803187 | orchestrator |
2025-09-19 11:35:43.803193 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-19 11:35:43.803199 | orchestrator | Friday 19 September 2025 11:28:13 +0000 (0:00:00.190) 0:03:22.219 ******
2025-09-19 11:35:43.803205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 11:35:43.803211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 11:35:43.803217 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 11:35:43.803223 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803229 | orchestrator |
2025-09-19 11:35:43.803235 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-19 11:35:43.803241 | orchestrator | Friday 19 September 2025 11:28:14 +0000 (0:00:00.604) 0:03:22.823 ******
2025-09-19 11:35:43.803247 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803253 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.803259 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.803265 | orchestrator |
2025-09-19 11:35:43.803288 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-19 11:35:43.803297 | orchestrator | Friday 19 September 2025 11:28:14 +0000 (0:00:00.659) 0:03:23.482 ******
2025-09-19 11:35:43.803307 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803318 | orchestrator |
2025-09-19 11:35:43.803329 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-19 11:35:43.803340 | orchestrator | Friday 19 September 2025 11:28:15 +0000 (0:00:00.227) 0:03:23.710 ******
2025-09-19 11:35:43.803350 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803360 | orchestrator |
2025-09-19 11:35:43.803371 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-19 11:35:43.803377 | orchestrator | Friday 19 September 2025 11:28:15 +0000 (0:00:00.203) 0:03:23.914 ******
2025-09-19 11:35:43.803383 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.803389 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.803395 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.803401 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.803407 | orchestrator |
2025-09-19 11:35:43.803413 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-19 11:35:43.803419 | orchestrator | Friday 19 September 2025 11:28:16 +0000 (0:00:00.949) 0:03:24.864 ******
2025-09-19 11:35:43.803425 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.803431 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.803437 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.803443 | orchestrator |
2025-09-19 11:35:43.803449 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-19 11:35:43.803455 | orchestrator | Friday 19 September 2025 11:28:16 +0000 (0:00:00.319) 0:03:25.183 ******
2025-09-19 11:35:43.803461 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:35:43.803467 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:35:43.803473 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:35:43.803479 | orchestrator |
2025-09-19 11:35:43.803485 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-19 11:35:43.803491 | orchestrator | Friday 19 September 2025 11:28:17 +0000 (0:00:01.333) 0:03:26.517 ******
2025-09-19 11:35:43.803497 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 11:35:43.803503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 11:35:43.803509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 11:35:43.803515 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803521 | orchestrator |
2025-09-19 11:35:43.803527 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-19 11:35:43.803533 | orchestrator | Friday 19 September 2025 11:28:18 +0000 (0:00:00.955) 0:03:27.472 ******
2025-09-19 11:35:43.803539 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.803551 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.803557 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.803562 | orchestrator |
2025-09-19 11:35:43.803569 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-09-19 11:35:43.803575 | orchestrator | Friday 19 September 2025 11:28:19 +0000 (0:00:00.385) 0:03:27.857 ******
2025-09-19 11:35:43.803581 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.803587 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.803593 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.803599 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.803605 | orchestrator |
2025-09-19 11:35:43.803611 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-09-19 11:35:43.803617 | orchestrator | Friday 19 September 2025 11:28:20 +0000 (0:00:01.377) 0:03:29.234 ******
2025-09-19 11:35:43.803623 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.803629 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.803635 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.803641 | orchestrator |
2025-09-19 11:35:43.803663 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-09-19 11:35:43.803670 | orchestrator | Friday 19 September 2025 11:28:21 +0000 (0:00:00.561) 0:03:29.796 ******
2025-09-19 11:35:43.803677 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:35:43.803683 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:35:43.803689 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:35:43.803695 | orchestrator |
2025-09-19 11:35:43.803705 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-09-19 11:35:43.803711 | orchestrator | Friday 19 September 2025 11:28:22 +0000 (0:00:01.710) 0:03:31.507 ******
2025-09-19 11:35:43.803717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 11:35:43.803723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 11:35:43.803729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 11:35:43.803735 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803741 | orchestrator |
2025-09-19 11:35:43.803747 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-19 11:35:43.803753 | orchestrator | Friday 19 September 2025 11:28:23 +0000 (0:00:00.723) 0:03:32.230 ******
2025-09-19 11:35:43.803759 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.803765 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.803771 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.803777 | orchestrator |
2025-09-19 11:35:43.803784 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-09-19 11:35:43.803790 | orchestrator | Friday 19 September 2025 11:28:24 +0000 (0:00:00.641) 0:03:32.872 ******
2025-09-19 11:35:43.803796 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.803802 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.803808 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.803814 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803820 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.803826 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.803832 | orchestrator |
2025-09-19 11:35:43.803838 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-19 11:35:43.803844 | orchestrator | Friday 19 September 2025 11:28:25 +0000 (0:00:01.518) 0:03:34.390 ******
2025-09-19 11:35:43.803872 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.803879 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.803885 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.803891 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-2, testbed-node-1
2025-09-19 11:35:43.803898 | orchestrator |
2025-09-19 11:35:43.803904 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-19 11:35:43.803910 | orchestrator | Friday 19 September 2025 11:28:27 +0000 (0:00:01.772) 0:03:36.163 ******
2025-09-19 11:35:43.803920 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.803927 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.803933 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.803939 | orchestrator |
2025-09-19 11:35:43.803945 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-19 11:35:43.803951 | orchestrator | Friday 19 September 2025 11:28:27 +0000 (0:00:00.434) 0:03:36.598 ******
2025-09-19 11:35:43.803957 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.803963 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.803969 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.803975 | orchestrator |
2025-09-19 11:35:43.803981 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-19 11:35:43.803987 | orchestrator | Friday 19 September 2025 11:28:30 +0000 (0:00:02.167) 0:03:38.766 ******
2025-09-19 11:35:43.803994 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 11:35:43.804000 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 11:35:43.804006 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 11:35:43.804011 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.804017 | orchestrator |
2025-09-19 11:35:43.804023 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-19 11:35:43.804030 | orchestrator | Friday 19 September 2025 11:28:30 +0000 (0:00:00.646) 0:03:39.412 ******
2025-09-19 11:35:43.804036 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.804042 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.804048 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.804054 | orchestrator |
2025-09-19 11:35:43.804060 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-09-19 11:35:43.804066 | orchestrator |
2025-09-19 11:35:43.804072 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 11:35:43.804078 | orchestrator | Friday 19 September 2025 11:28:31 +0000 (0:00:00.933) 0:03:40.346 ******
2025-09-19 11:35:43.804084 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:35:43.804090 | orchestrator |
2025-09-19 11:35:43.804096 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 11:35:43.804103 | orchestrator | Friday 19 September 2025 11:28:32 +0000 (0:00:00.700) 0:03:41.046 ******
2025-09-19 11:35:43.804109 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:35:43.804115 | orchestrator |
2025-09-19 11:35:43.804121 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 11:35:43.804127 | orchestrator | Friday 19 September 2025 11:28:32 +0000 (0:00:00.552) 0:03:41.598 ******
2025-09-19 11:35:43.804133 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.804139 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.804145 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.804151 | orchestrator |
2025-09-19 11:35:43.804157 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 11:35:43.804163 | orchestrator | Friday 19 September 2025 11:28:34 +0000 (0:00:01.225) 0:03:42.824 ******
2025-09-19 11:35:43.804169 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.804175 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.804181 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.804187 | orchestrator |
2025-09-19 11:35:43.804193 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-19 11:35:43.804199 | orchestrator | Friday 19 September 2025 11:28:34 +0000 (0:00:00.288) 0:03:43.112 ******
2025-09-19 11:35:43.804205 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.804212 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.804218 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.804223 | orchestrator |
2025-09-19 11:35:43.804235 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-19 11:35:43.804245 | orchestrator | Friday 19 September 2025 11:28:34 +0000 (0:00:00.373) 0:03:43.485 ******
2025-09-19 11:35:43.804251 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.804257 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.804263 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.804269 | orchestrator |
2025-09-19 11:35:43.804275 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-19 11:35:43.804281 | orchestrator | Friday 19 September 2025 11:28:35 +0000 (0:00:00.336) 0:03:43.821 ******
2025-09-19 11:35:43.804287 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.804294 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.804300 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.804306 | orchestrator |
2025-09-19 11:35:43.804312 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 11:35:43.804318 | orchestrator | Friday 19 September 2025 11:28:36 +0000 (0:00:01.067) 0:03:44.889 ******
2025-09-19 11:35:43.804324 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.804334 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.804345 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.804356 | orchestrator |
2025-09-19 11:35:43.804367 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-19 11:35:43.804375 | orchestrator | Friday 19 September 2025 11:28:36 +0000 (0:00:00.277) 0:03:45.167 ******
2025-09-19 11:35:43.804381 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.804387 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.804393 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.804399 | orchestrator |
2025-09-19 11:35:43.804406 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-19 11:35:43.804431 | orchestrator | Friday 19 September 2025 11:28:36 +0000 (0:00:00.286) 0:03:45.454 ******
2025-09-19 11:35:43.804438 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.804444 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.804450 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.804456 | orchestrator |
2025-09-19 11:35:43.804463 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 11:35:43.804469 | orchestrator | Friday 19 September 2025 11:28:37 +0000 (0:00:00.913) 0:03:46.367 ******
2025-09-19 11:35:43.804475 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.804481 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.804487 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.804493 | orchestrator |
2025-09-19 11:35:43.804499 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 11:35:43.804505 | orchestrator | Friday 19 September 2025 11:28:38 +0000 (0:00:00.820) 0:03:47.189 ******
2025-09-19 11:35:43.804511 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.804517 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.804523 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.804529 | orchestrator |
2025-09-19 11:35:43.804535 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 11:35:43.804541 | orchestrator | Friday 19 September 2025 11:28:39 +0000 (0:00:00.647) 0:03:47.836 ******
2025-09-19 11:35:43.804547 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.804553 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.804559 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.804565 | orchestrator |
2025-09-19 11:35:43.804571 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 11:35:43.804577 | orchestrator | Friday 19 September 2025 11:28:39 +0000 (0:00:00.566) 0:03:48.403 ******
2025-09-19 11:35:43.804583 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.804589 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.804595 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.804601 | orchestrator |
2025-09-19 11:35:43.804611 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 11:35:43.804621 | orchestrator | Friday 19 September 2025 11:28:40 +0000 (0:00:00.603) 0:03:49.007 ******
2025-09-19 11:35:43.804636 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.804685 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.804698 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.804708 | orchestrator |
2025-09-19 11:35:43.804717 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 11:35:43.804726 | orchestrator | Friday 19 September 2025 11:28:40 +0000 (0:00:00.448) 0:03:49.455 ******
2025-09-19 11:35:43.804735 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.804746 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.804754 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.804762 | orchestrator |
2025-09-19 11:35:43.804771 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 11:35:43.804780 | orchestrator | Friday 19 September 2025 11:28:41 +0000 (0:00:00.783) 0:03:50.239 ******
2025-09-19 11:35:43.804788 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.804796 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.804805 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.804814 | orchestrator |
2025-09-19 11:35:43.804823 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 11:35:43.804831 | orchestrator | Friday 19 September 2025 11:28:42 +0000 (0:00:00.444) 0:03:50.683 ******
2025-09-19 11:35:43.804838 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.804846 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.804858 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.804870 | orchestrator |
2025-09-19 11:35:43.804878 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 11:35:43.804886 | orchestrator | Friday 19 September 2025 11:28:42 +0000 (0:00:00.417) 0:03:51.100 ******
2025-09-19 11:35:43.804894 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.804904 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.804913 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.804922 | orchestrator |
2025-09-19 11:35:43.804931 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 11:35:43.804939 | orchestrator | Friday 19 September 2025 11:28:42 +0000 (0:00:00.383) 0:03:51.484 ******
2025-09-19 11:35:43.804944 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.804950 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.804955 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.804963 | orchestrator |
2025-09-19 11:35:43.804972 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 11:35:43.804986 | orchestrator | Friday 19 September 2025 11:28:43 +0000 (0:00:00.811) 0:03:52.295 ******
2025-09-19 11:35:43.804995 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.805004 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.805013 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.805019 | orchestrator |
2025-09-19 11:35:43.805024 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-09-19 11:35:43.805029 | orchestrator | Friday 19 September 2025 11:28:44 +0000 (0:00:00.555) 0:03:52.851 ******
2025-09-19 11:35:43.805035 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.805040 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.805045 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.805050 | orchestrator |
2025-09-19 11:35:43.805056 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-09-19 11:35:43.805061 | orchestrator | Friday 19 September 2025 11:28:44 +0000 (0:00:00.321) 0:03:53.172 ******
2025-09-19 11:35:43.805066 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:35:43.805072 | orchestrator |
2025-09-19 11:35:43.805077 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-09-19 11:35:43.805082 | orchestrator | Friday 19 September 2025 11:28:45 +0000 (0:00:00.735) 0:03:53.908 ******
2025-09-19 11:35:43.805087 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.805093 | orchestrator |
2025-09-19 11:35:43.805104 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-09-19 11:35:43.805109 | orchestrator | Friday 19 September 2025 11:28:45 +0000 (0:00:00.156) 0:03:54.065 ******
2025-09-19 11:35:43.805115 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-09-19 11:35:43.805121 | orchestrator |
2025-09-19 11:35:43.805169 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-09-19 11:35:43.805180 | orchestrator | Friday 19 September 2025 11:28:46 +0000 (0:00:01.138) 0:03:55.203 ******
2025-09-19 11:35:43.805189 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.805195 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.805200 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.805206 | orchestrator |
2025-09-19 11:35:43.805211 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-09-19 11:35:43.805216 | orchestrator | Friday 19 September 2025 11:28:46 +0000 (0:00:00.347) 0:03:55.551 ******
2025-09-19 11:35:43.805222 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.805227 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.805232 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.805237 | orchestrator |
2025-09-19 11:35:43.805243 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-09-19 11:35:43.805248 | orchestrator | Friday 19 September 2025 11:28:47 +0000 (0:00:00.485) 0:03:56.037 ******
2025-09-19 11:35:43.805253 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.805259 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.805264 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.805269 | orchestrator |
2025-09-19 11:35:43.805275 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-09-19 11:35:43.805280 | orchestrator | Friday 19 September 2025 11:28:48 +0000 (0:00:01.381) 0:03:57.419 ******
2025-09-19 11:35:43.805285 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.805291 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.805296 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.805301 | orchestrator |
2025-09-19 11:35:43.805307 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-09-19 11:35:43.805312 | orchestrator | Friday 19 September 2025 11:28:49 +0000 (0:00:00.850) 0:03:58.269 ******
2025-09-19 11:35:43.805317 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.805323 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.805328 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.805333 | orchestrator |
2025-09-19 11:35:43.805339 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-09-19 11:35:43.805344 | orchestrator | Friday 19 September 2025 11:28:50 +0000 (0:00:00.664) 0:03:58.933 ******
2025-09-19 11:35:43.805352 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.805361 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.805368 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.805376 | orchestrator |
2025-09-19 11:35:43.805384 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-09-19 11:35:43.805394 | orchestrator | Friday 19 September 2025 11:28:51 +0000 (0:00:00.899) 0:03:59.833 ******
2025-09-19 11:35:43.805404 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.805412 | orchestrator |
2025-09-19 11:35:43.805421 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-09-19 11:35:43.805428 | orchestrator | Friday 19 September 2025 11:28:52 +0000 (0:00:01.294) 0:04:01.127 ******
2025-09-19 11:35:43.805434 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.805439 | orchestrator |
2025-09-19 11:35:43.805445 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-09-19 11:35:43.805450 | orchestrator | Friday 19 September 2025 11:28:53 +0000 (0:00:00.679) 0:04:01.806 ******
2025-09-19 11:35:43.805455 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 11:35:43.805461 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 11:35:43.805466 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 11:35:43.805478 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 11:35:43.805483 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-09-19 11:35:43.805489 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-19 11:35:43.805494 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 11:35:43.805499 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-09-19 11:35:43.805505 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-09-19 11:35:43.805510 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-19 11:35:43.805515 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-09-19 11:35:43.805520 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-09-19 11:35:43.805529 | orchestrator |
2025-09-19 11:35:43.805542 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-09-19 11:35:43.805554 | orchestrator | Friday 19 September 2025 11:28:56 +0000 (0:00:03.254) 0:04:05.061 ******
2025-09-19 11:35:43.805566 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.805575 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.805583 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.805591 | orchestrator |
2025-09-19 11:35:43.805600 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-09-19 11:35:43.805608 | orchestrator | Friday 19 September 2025 11:28:57 +0000 (0:00:01.209) 0:04:06.271 ******
2025-09-19 11:35:43.805617 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.805625 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.805632 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.805641 | orchestrator |
2025-09-19 11:35:43.805667 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-09-19 11:35:43.805676 | orchestrator | Friday 19 September 2025 11:28:58 +0000 (0:00:00.698) 0:04:06.969 ******
2025-09-19 11:35:43.805685 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.805694 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.805703 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.805712 | orchestrator |
2025-09-19 11:35:43.805720 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-09-19 11:35:43.805728 | orchestrator | Friday 19 September 2025 11:28:58 +0000 (0:00:00.323) 0:04:07.293 ******
2025-09-19 11:35:43.805734 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.805739 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.805745 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.805750 | orchestrator |
2025-09-19 11:35:43.805756 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-09-19 11:35:43.805787 | orchestrator | Friday 19 September 2025 11:29:00 +0000 (0:00:01.737) 0:04:09.031 ******
2025-09-19 11:35:43.805794 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.805799 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.805805 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.805810 | orchestrator |
2025-09-19 11:35:43.805815 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-09-19 11:35:43.805821 | orchestrator | Friday 19 September 2025 11:29:01 +0000 (0:00:01.326) 0:04:10.357 ******
2025-09-19 11:35:43.805826 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.805832 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.805840 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.805849 | orchestrator |
2025-09-19 11:35:43.805857 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-09-19 11:35:43.805866 | orchestrator | Friday 19 September 2025 11:29:01 +0000 (0:00:00.291) 0:04:10.649 ******
2025-09-19 11:35:43.805875 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:35:43.805884 | orchestrator |
2025-09-19 11:35:43.805893 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-09-19 11:35:43.805903 | orchestrator | Friday 19 September 2025 11:29:02 +0000 (0:00:00.613) 0:04:11.263 ******
2025-09-19 11:35:43.805919 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.805928 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.805937 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.805946 | orchestrator |
2025-09-19 11:35:43.805955 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-09-19 11:35:43.805964 | orchestrator | Friday 19 September 2025 11:29:02 +0000 (0:00:00.224) 0:04:11.487 ******
2025-09-19 11:35:43.805975 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.805980 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.805985 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.805991 | orchestrator |
2025-09-19 11:35:43.805996 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-09-19 11:35:43.806002 | orchestrator | Friday 19 September 2025 11:29:03 +0000 (0:00:00.220) 0:04:11.708 ******
2025-09-19 11:35:43.806007 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:35:43.806013 | orchestrator |
2025-09-19 11:35:43.806041 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-09-19 11:35:43.806047 | orchestrator | Friday 19 September 2025 11:29:03 +0000 (0:00:00.612) 0:04:12.321 ******
2025-09-19 11:35:43.806052 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.806058 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.806063 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.806069 | orchestrator |
2025-09-19 11:35:43.806074 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-09-19 11:35:43.806080 | orchestrator | Friday 19 September 2025 11:29:05 +0000 (0:00:01.440) 0:04:13.761 ******
2025-09-19 11:35:43.806085 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.806090 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.806096 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.806101 | orchestrator |
2025-09-19 11:35:43.806106 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-09-19 11:35:43.806112 | orchestrator | Friday 19 September 2025 11:29:06 +0000 (0:00:01.178) 0:04:14.939 ******
2025-09-19 11:35:43.806117 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.806123 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.806128 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.806133 | orchestrator |
2025-09-19 11:35:43.806139 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-09-19 11:35:43.806144 | orchestrator | Friday 19 September 2025 11:29:08 +0000 (0:00:02.293) 0:04:17.233 ******
2025-09-19 11:35:43.806149 |
orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:43.806155 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:43.806160 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:43.806165 | orchestrator | 2025-09-19 11:35:43.806171 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-19 11:35:43.806176 | orchestrator | Friday 19 September 2025 11:29:10 +0000 (0:00:02.077) 0:04:19.311 ****** 2025-09-19 11:35:43.806181 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:35:43.806187 | orchestrator | 2025-09-19 11:35:43.806196 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-19 11:35:43.806202 | orchestrator | Friday 19 September 2025 11:29:11 +0000 (0:00:00.553) 0:04:19.864 ****** 2025-09-19 11:35:43.806207 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.806213 | orchestrator | 2025-09-19 11:35:43.806218 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-19 11:35:43.806223 | orchestrator | Friday 19 September 2025 11:29:12 +0000 (0:00:01.639) 0:04:21.504 ****** 2025-09-19 11:35:43.806229 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.806234 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.806239 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.806245 | orchestrator | 2025-09-19 11:35:43.806250 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-19 11:35:43.806260 | orchestrator | Friday 19 September 2025 11:29:21 +0000 (0:00:09.085) 0:04:30.589 ****** 2025-09-19 11:35:43.806266 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.806271 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.806276 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.806281 | orchestrator | 
2025-09-19 11:35:43.806287 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-19 11:35:43.806292 | orchestrator | Friday 19 September 2025 11:29:22 +0000 (0:00:00.372) 0:04:30.962 ****** 2025-09-19 11:35:43.806320 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__609d1c998db4abf3a97de36ad462cda0d6005ad4'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-19 11:35:43.806329 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__609d1c998db4abf3a97de36ad462cda0d6005ad4'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-19 11:35:43.806339 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__609d1c998db4abf3a97de36ad462cda0d6005ad4'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-19 11:35:43.806349 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__609d1c998db4abf3a97de36ad462cda0d6005ad4'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-19 11:35:43.806360 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 
'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__609d1c998db4abf3a97de36ad462cda0d6005ad4'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-19 11:35:43.806370 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__609d1c998db4abf3a97de36ad462cda0d6005ad4'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__609d1c998db4abf3a97de36ad462cda0d6005ad4'}])  2025-09-19 11:35:43.806380 | orchestrator | 2025-09-19 11:35:43.806389 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 11:35:43.806398 | orchestrator | Friday 19 September 2025 11:29:37 +0000 (0:00:14.804) 0:04:45.766 ****** 2025-09-19 11:35:43.806408 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.806417 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.806427 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.806436 | orchestrator | 2025-09-19 11:35:43.806442 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-19 11:35:43.806447 | orchestrator | Friday 19 September 2025 11:29:37 +0000 (0:00:00.402) 0:04:46.169 ****** 2025-09-19 11:35:43.806452 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:35:43.806458 | orchestrator | 2025-09-19 11:35:43.806463 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-19 11:35:43.806474 | orchestrator | Friday 19 September 2025 11:29:38 +0000 
(0:00:00.557) 0:04:46.727 ****** 2025-09-19 11:35:43.806479 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.806485 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.806490 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.806495 | orchestrator | 2025-09-19 11:35:43.806504 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-19 11:35:43.806509 | orchestrator | Friday 19 September 2025 11:29:38 +0000 (0:00:00.451) 0:04:47.178 ****** 2025-09-19 11:35:43.806515 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.806520 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.806525 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.806530 | orchestrator | 2025-09-19 11:35:43.806536 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-19 11:35:43.806541 | orchestrator | Friday 19 September 2025 11:29:38 +0000 (0:00:00.350) 0:04:47.529 ****** 2025-09-19 11:35:43.806547 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-19 11:35:43.806552 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-19 11:35:43.806558 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-19 11:35:43.806567 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.806575 | orchestrator | 2025-09-19 11:35:43.806585 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-19 11:35:43.806594 | orchestrator | Friday 19 September 2025 11:29:39 +0000 (0:00:00.575) 0:04:48.105 ****** 2025-09-19 11:35:43.806603 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.806612 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.806618 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.806627 | orchestrator | 2025-09-19 11:35:43.806635 | orchestrator | PLAY [Apply role ceph-mgr] 
***************************************************** 2025-09-19 11:35:43.806644 | orchestrator | 2025-09-19 11:35:43.806714 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 11:35:43.806724 | orchestrator | Friday 19 September 2025 11:29:39 +0000 (0:00:00.512) 0:04:48.618 ****** 2025-09-19 11:35:43.806760 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:35:43.806767 | orchestrator | 2025-09-19 11:35:43.806772 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 11:35:43.806777 | orchestrator | Friday 19 September 2025 11:29:40 +0000 (0:00:00.676) 0:04:49.294 ****** 2025-09-19 11:35:43.806782 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:35:43.806787 | orchestrator | 2025-09-19 11:35:43.806791 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 11:35:43.806796 | orchestrator | Friday 19 September 2025 11:29:41 +0000 (0:00:00.463) 0:04:49.758 ****** 2025-09-19 11:35:43.806801 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.806806 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.806810 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.806815 | orchestrator | 2025-09-19 11:35:43.806820 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 11:35:43.806825 | orchestrator | Friday 19 September 2025 11:29:42 +0000 (0:00:00.983) 0:04:50.741 ****** 2025-09-19 11:35:43.806830 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.806834 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.806839 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.806844 | orchestrator | 2025-09-19 
11:35:43.806848 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 11:35:43.806853 | orchestrator | Friday 19 September 2025 11:29:42 +0000 (0:00:00.346) 0:04:51.087 ****** 2025-09-19 11:35:43.806858 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.806863 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.806867 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.806877 | orchestrator | 2025-09-19 11:35:43.806882 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 11:35:43.806887 | orchestrator | Friday 19 September 2025 11:29:42 +0000 (0:00:00.274) 0:04:51.362 ****** 2025-09-19 11:35:43.806891 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.806896 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.806901 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.806906 | orchestrator | 2025-09-19 11:35:43.806910 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 11:35:43.806915 | orchestrator | Friday 19 September 2025 11:29:42 +0000 (0:00:00.276) 0:04:51.638 ****** 2025-09-19 11:35:43.806920 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.806925 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.806929 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.806934 | orchestrator | 2025-09-19 11:35:43.806939 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 11:35:43.806944 | orchestrator | Friday 19 September 2025 11:29:43 +0000 (0:00:01.010) 0:04:52.649 ****** 2025-09-19 11:35:43.806948 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.806953 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.806958 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.806962 | orchestrator | 2025-09-19 11:35:43.806967 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 11:35:43.806972 | orchestrator | Friday 19 September 2025 11:29:44 +0000 (0:00:00.293) 0:04:52.943 ****** 2025-09-19 11:35:43.806977 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.806982 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.806986 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.806991 | orchestrator | 2025-09-19 11:35:43.806996 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 11:35:43.807001 | orchestrator | Friday 19 September 2025 11:29:44 +0000 (0:00:00.299) 0:04:53.243 ****** 2025-09-19 11:35:43.807005 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.807010 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.807015 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.807019 | orchestrator | 2025-09-19 11:35:43.807024 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 11:35:43.807029 | orchestrator | Friday 19 September 2025 11:29:45 +0000 (0:00:00.685) 0:04:53.928 ****** 2025-09-19 11:35:43.807034 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.807038 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.807043 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.807048 | orchestrator | 2025-09-19 11:35:43.807052 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 11:35:43.807061 | orchestrator | Friday 19 September 2025 11:29:46 +0000 (0:00:01.050) 0:04:54.979 ****** 2025-09-19 11:35:43.807066 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.807071 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.807076 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.807080 | orchestrator | 2025-09-19 11:35:43.807085 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-09-19 11:35:43.807090 | orchestrator | Friday 19 September 2025 11:29:46 +0000 (0:00:00.280) 0:04:55.259 ****** 2025-09-19 11:35:43.807095 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.807099 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.807104 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.807109 | orchestrator | 2025-09-19 11:35:43.807113 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 11:35:43.807118 | orchestrator | Friday 19 September 2025 11:29:46 +0000 (0:00:00.342) 0:04:55.602 ****** 2025-09-19 11:35:43.807123 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.807128 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.807132 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.807137 | orchestrator | 2025-09-19 11:35:43.807142 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 11:35:43.807150 | orchestrator | Friday 19 September 2025 11:29:47 +0000 (0:00:00.289) 0:04:55.891 ****** 2025-09-19 11:35:43.807155 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.807160 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.807164 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.807169 | orchestrator | 2025-09-19 11:35:43.807174 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 11:35:43.807179 | orchestrator | Friday 19 September 2025 11:29:47 +0000 (0:00:00.474) 0:04:56.366 ****** 2025-09-19 11:35:43.807197 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.807202 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.807207 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.807212 | orchestrator | 2025-09-19 11:35:43.807217 | orchestrator | TASK [ceph-handler : Set_fact 
handler_nfs_status] ****************************** 2025-09-19 11:35:43.807221 | orchestrator | Friday 19 September 2025 11:29:48 +0000 (0:00:00.292) 0:04:56.658 ****** 2025-09-19 11:35:43.807226 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.807231 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.807236 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.807240 | orchestrator | 2025-09-19 11:35:43.807245 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 11:35:43.807250 | orchestrator | Friday 19 September 2025 11:29:48 +0000 (0:00:00.272) 0:04:56.930 ****** 2025-09-19 11:35:43.807255 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.807259 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.807264 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.807269 | orchestrator | 2025-09-19 11:35:43.807273 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 11:35:43.807278 | orchestrator | Friday 19 September 2025 11:29:48 +0000 (0:00:00.267) 0:04:57.198 ****** 2025-09-19 11:35:43.807283 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.807288 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.807292 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.807297 | orchestrator | 2025-09-19 11:35:43.807302 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 11:35:43.807306 | orchestrator | Friday 19 September 2025 11:29:49 +0000 (0:00:00.499) 0:04:57.697 ****** 2025-09-19 11:35:43.807311 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.807316 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.807320 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.807325 | orchestrator | 2025-09-19 11:35:43.807330 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-09-19 11:35:43.807335 | orchestrator | Friday 19 September 2025 11:29:49 +0000 (0:00:00.320) 0:04:58.018 ****** 2025-09-19 11:35:43.807339 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.807344 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.807349 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.807353 | orchestrator | 2025-09-19 11:35:43.807358 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-19 11:35:43.807363 | orchestrator | Friday 19 September 2025 11:29:49 +0000 (0:00:00.503) 0:04:58.522 ****** 2025-09-19 11:35:43.807368 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 11:35:43.807373 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 11:35:43.807377 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 11:35:43.807382 | orchestrator | 2025-09-19 11:35:43.807387 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-19 11:35:43.807392 | orchestrator | Friday 19 September 2025 11:29:50 +0000 (0:00:00.781) 0:04:59.303 ****** 2025-09-19 11:35:43.807397 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:35:43.807401 | orchestrator | 2025-09-19 11:35:43.807406 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-19 11:35:43.807418 | orchestrator | Friday 19 September 2025 11:29:51 +0000 (0:00:00.651) 0:04:59.954 ****** 2025-09-19 11:35:43.807423 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:43.807428 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:43.807432 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:43.807437 | orchestrator | 2025-09-19 11:35:43.807442 | orchestrator | TASK [ceph-mgr : Fetch 
ceph mgr keyring] *************************************** 2025-09-19 11:35:43.807447 | orchestrator | Friday 19 September 2025 11:29:51 +0000 (0:00:00.627) 0:05:00.582 ****** 2025-09-19 11:35:43.807451 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.807456 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.807461 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.807465 | orchestrator | 2025-09-19 11:35:43.807470 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-19 11:35:43.807475 | orchestrator | Friday 19 September 2025 11:29:52 +0000 (0:00:00.299) 0:05:00.881 ****** 2025-09-19 11:35:43.807480 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 11:35:43.807485 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 11:35:43.807492 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 11:35:43.807497 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-19 11:35:43.807502 | orchestrator | 2025-09-19 11:35:43.807506 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-19 11:35:43.807511 | orchestrator | Friday 19 September 2025 11:30:03 +0000 (0:00:11.000) 0:05:11.882 ****** 2025-09-19 11:35:43.807516 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.807521 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.807525 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.807530 | orchestrator | 2025-09-19 11:35:43.807535 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-19 11:35:43.807540 | orchestrator | Friday 19 September 2025 11:30:03 +0000 (0:00:00.527) 0:05:12.409 ****** 2025-09-19 11:35:43.807544 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-19 11:35:43.807549 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-19 
11:35:43.807554 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-19 11:35:43.807559 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-19 11:35:43.807563 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:35:43.807568 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:35:43.807573 | orchestrator | 2025-09-19 11:35:43.807577 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-09-19 11:35:43.807582 | orchestrator | Friday 19 September 2025 11:30:05 +0000 (0:00:02.159) 0:05:14.569 ****** 2025-09-19 11:35:43.807587 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-19 11:35:43.807592 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-19 11:35:43.807610 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-19 11:35:43.807615 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 11:35:43.807620 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-19 11:35:43.807625 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-19 11:35:43.807629 | orchestrator | 2025-09-19 11:35:43.807634 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-19 11:35:43.807639 | orchestrator | Friday 19 September 2025 11:30:07 +0000 (0:00:01.248) 0:05:15.817 ****** 2025-09-19 11:35:43.807643 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.807664 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.807672 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.807680 | orchestrator | 2025-09-19 11:35:43.807687 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-19 11:35:43.807696 | orchestrator | Friday 19 September 2025 11:30:07 +0000 (0:00:00.694) 0:05:16.511 ****** 2025-09-19 11:35:43.807701 | orchestrator | 
skipping: [testbed-node-0] 2025-09-19 11:35:43.807711 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.807715 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.807720 | orchestrator | 2025-09-19 11:35:43.807725 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-19 11:35:43.807730 | orchestrator | Friday 19 September 2025 11:30:08 +0000 (0:00:00.476) 0:05:16.988 ****** 2025-09-19 11:35:43.807734 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.807739 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.807744 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.807748 | orchestrator | 2025-09-19 11:35:43.807753 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-19 11:35:43.807758 | orchestrator | Friday 19 September 2025 11:30:08 +0000 (0:00:00.292) 0:05:17.280 ****** 2025-09-19 11:35:43.807762 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:35:43.807767 | orchestrator | 2025-09-19 11:35:43.807772 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-19 11:35:43.807777 | orchestrator | Friday 19 September 2025 11:30:09 +0000 (0:00:00.487) 0:05:17.767 ****** 2025-09-19 11:35:43.807781 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.807786 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.807791 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.807795 | orchestrator | 2025-09-19 11:35:43.807800 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-09-19 11:35:43.807805 | orchestrator | Friday 19 September 2025 11:30:09 +0000 (0:00:00.612) 0:05:18.380 ****** 2025-09-19 11:35:43.807809 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.807814 | 
orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.807819 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:35:43.807823 | orchestrator |
2025-09-19 11:35:43.807828 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-09-19 11:35:43.807833 | orchestrator | Friday 19 September 2025 11:30:10 +0000 (0:00:00.331) 0:05:18.711 ******
2025-09-19 11:35:43.807837 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:35:43.807842 | orchestrator |
2025-09-19 11:35:43.807847 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-09-19 11:35:43.807852 | orchestrator | Friday 19 September 2025 11:30:10 +0000 (0:00:00.388) 0:05:19.099 ******
2025-09-19 11:35:43.807856 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.807861 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.807869 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.807877 | orchestrator |
2025-09-19 11:35:43.807885 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-09-19 11:35:43.807892 | orchestrator | Friday 19 September 2025 11:30:11 +0000 (0:00:01.416) 0:05:20.516 ******
2025-09-19 11:35:43.807900 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.807908 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.807916 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.807925 | orchestrator |
2025-09-19 11:35:43.807933 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-09-19 11:35:43.807941 | orchestrator | Friday 19 September 2025 11:30:13 +0000 (0:00:01.207) 0:05:21.724 ******
2025-09-19 11:35:43.807949 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.807957 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.807968 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.807977 | orchestrator |
2025-09-19 11:35:43.807986 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-09-19 11:35:43.807994 | orchestrator | Friday 19 September 2025 11:30:14 +0000 (0:00:01.710) 0:05:23.435 ******
2025-09-19 11:35:43.808002 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.808009 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.808016 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.808029 | orchestrator |
2025-09-19 11:35:43.808038 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-09-19 11:35:43.808045 | orchestrator | Friday 19 September 2025 11:30:16 +0000 (0:00:01.967) 0:05:25.403 ******
2025-09-19 11:35:43.808050 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.808055 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:35:43.808060 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-09-19 11:35:43.808064 | orchestrator |
2025-09-19 11:35:43.808069 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-09-19 11:35:43.808074 | orchestrator | Friday 19 September 2025 11:30:17 +0000 (0:00:00.618) 0:05:26.021 ******
2025-09-19 11:35:43.808078 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-09-19 11:35:43.808083 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-09-19 11:35:43.808088 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-09-19 11:35:43.808110 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-09-19 11:35:43.808116 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2025-09-19 11:35:43.808121 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:35:43.808125 | orchestrator |
2025-09-19 11:35:43.808130 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-09-19 11:35:43.808135 | orchestrator | Friday 19 September 2025 11:30:47 +0000 (0:00:30.180) 0:05:56.202 ******
2025-09-19 11:35:43.808140 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:35:43.808144 | orchestrator |
2025-09-19 11:35:43.808149 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-09-19 11:35:43.808154 | orchestrator | Friday 19 September 2025 11:30:48 +0000 (0:00:01.363) 0:05:57.566 ******
2025-09-19 11:35:43.808159 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.808163 | orchestrator |
2025-09-19 11:35:43.808168 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-09-19 11:35:43.808173 | orchestrator | Friday 19 September 2025 11:30:49 +0000 (0:00:00.306) 0:05:57.872 ******
2025-09-19 11:35:43.808178 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.808182 | orchestrator |
2025-09-19 11:35:43.808187 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-09-19 11:35:43.808192 | orchestrator | Friday 19 September 2025 11:30:49 +0000 (0:00:00.141) 0:05:58.014 ******
2025-09-19 11:35:43.808197 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-09-19 11:35:43.808201 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-09-19 11:35:43.808206 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-09-19 11:35:43.808211 | orchestrator |
2025-09-19 11:35:43.808215 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-09-19 11:35:43.808220 | orchestrator | Friday 19 September 2025 11:30:55 +0000 (0:00:06.444) 0:06:04.458 ******
2025-09-19 11:35:43.808225 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-09-19 11:35:43.808230 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-09-19 11:35:43.808235 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-09-19 11:35:43.808239 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-09-19 11:35:43.808244 | orchestrator |
2025-09-19 11:35:43.808249 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-19 11:35:43.808254 | orchestrator | Friday 19 September 2025 11:31:01 +0000 (0:00:05.236) 0:06:09.694 ******
2025-09-19 11:35:43.808258 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.808263 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.808270 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.808275 | orchestrator |
2025-09-19 11:35:43.808280 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-19 11:35:43.808285 | orchestrator | Friday 19 September 2025 11:31:01 +0000 (0:00:00.704) 0:06:10.399 ******
2025-09-19 11:35:43.808290 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:35:43.808294 | orchestrator |
2025-09-19 11:35:43.808299 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-19 11:35:43.808304 | orchestrator | Friday 19 September 2025 11:31:02 +0000 (0:00:00.581) 0:06:10.980 ******
2025-09-19 11:35:43.808309 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.808313 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.808318 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.808323 | orchestrator |
2025-09-19 11:35:43.808328 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-19 11:35:43.808333 | orchestrator | Friday 19 September 2025 11:31:02 +0000 (0:00:00.640) 0:06:11.621 ******
2025-09-19 11:35:43.808337 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:35:43.808342 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:35:43.808347 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:35:43.808352 | orchestrator |
2025-09-19 11:35:43.808356 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-19 11:35:43.808364 | orchestrator | Friday 19 September 2025 11:31:04 +0000 (0:00:01.308) 0:06:12.929 ******
2025-09-19 11:35:43.808369 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 11:35:43.808374 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 11:35:43.808379 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 11:35:43.808383 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:35:43.808388 | orchestrator |
2025-09-19 11:35:43.808393 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-19 11:35:43.808398 | orchestrator | Friday 19 September 2025 11:31:04 +0000 (0:00:00.672) 0:06:13.602 ******
2025-09-19 11:35:43.808402 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:35:43.808407 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:35:43.808412 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:35:43.808417 | orchestrator |
2025-09-19 11:35:43.808422 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-09-19 11:35:43.808426 | orchestrator |
2025-09-19 11:35:43.808431 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 11:35:43.808436 | orchestrator | Friday 19 September 2025 11:31:05 +0000 (0:00:00.938) 0:06:14.541 ******
2025-09-19 11:35:43.808441 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.808446 | orchestrator |
2025-09-19 11:35:43.808450 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 11:35:43.808455 | orchestrator | Friday 19 September 2025 11:31:06 +0000 (0:00:00.529) 0:06:15.071 ******
2025-09-19 11:35:43.808474 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.808479 | orchestrator |
2025-09-19 11:35:43.808484 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 11:35:43.808489 | orchestrator | Friday 19 September 2025 11:31:07 +0000 (0:00:00.859) 0:06:15.930 ******
2025-09-19 11:35:43.808494 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.808498 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.808503 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.808508 | orchestrator |
2025-09-19 11:35:43.808512 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 11:35:43.808517 | orchestrator | Friday 19 September 2025 11:31:07 +0000 (0:00:00.461) 0:06:16.392 ******
2025-09-19 11:35:43.808526 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.808531 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.808536 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.808541 | orchestrator |
2025-09-19 11:35:43.808545 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-19 11:35:43.808550 | orchestrator | Friday 19 September 2025 11:31:08 +0000 (0:00:00.792) 0:06:17.184 ******
2025-09-19 11:35:43.808555 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.808559 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.808564 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.808569 | orchestrator |
2025-09-19 11:35:43.808574 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-19 11:35:43.808579 | orchestrator | Friday 19 September 2025 11:31:09 +0000 (0:00:00.839) 0:06:18.024 ******
2025-09-19 11:35:43.808583 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.808588 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.808593 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.808597 | orchestrator |
2025-09-19 11:35:43.808602 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-19 11:35:43.808607 | orchestrator | Friday 19 September 2025 11:31:10 +0000 (0:00:01.210) 0:06:19.235 ******
2025-09-19 11:35:43.808612 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.808617 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.808621 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.808626 | orchestrator |
2025-09-19 11:35:43.808631 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 11:35:43.808636 | orchestrator | Friday 19 September 2025 11:31:10 +0000 (0:00:00.325) 0:06:19.574 ******
2025-09-19 11:35:43.808640 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.808645 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.808665 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.808670 | orchestrator |
2025-09-19 11:35:43.808675 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-19 11:35:43.808680 | orchestrator | Friday 19 September 2025 11:31:11 +0000 (0:00:00.325) 0:06:19.900 ******
2025-09-19 11:35:43.808684 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.808689 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.808694 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.808699 | orchestrator |
2025-09-19 11:35:43.808703 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-19 11:35:43.808708 | orchestrator | Friday 19 September 2025 11:31:11 +0000 (0:00:00.314) 0:06:20.214 ******
2025-09-19 11:35:43.808713 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.808718 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.808722 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.808727 | orchestrator |
2025-09-19 11:35:43.808732 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 11:35:43.808737 | orchestrator | Friday 19 September 2025 11:31:12 +0000 (0:00:01.033) 0:06:21.248 ******
2025-09-19 11:35:43.808741 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.808746 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.808751 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.808755 | orchestrator |
2025-09-19 11:35:43.808760 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 11:35:43.808765 | orchestrator | Friday 19 September 2025 11:31:13 +0000 (0:00:00.649) 0:06:21.898 ******
2025-09-19 11:35:43.808770 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.808774 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.808779 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.808784 | orchestrator |
2025-09-19 11:35:43.808789 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 11:35:43.808794 | orchestrator | Friday 19 September 2025 11:31:13 +0000 (0:00:00.368) 0:06:22.266 ******
2025-09-19 11:35:43.808801 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.808806 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.808814 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.808819 | orchestrator |
2025-09-19 11:35:43.808824 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 11:35:43.808829 | orchestrator | Friday 19 September 2025 11:31:13 +0000 (0:00:00.346) 0:06:22.612 ******
2025-09-19 11:35:43.808834 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.808838 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.808843 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.808848 | orchestrator |
2025-09-19 11:35:43.808853 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 11:35:43.808857 | orchestrator | Friday 19 September 2025 11:31:14 +0000 (0:00:00.540) 0:06:23.152 ******
2025-09-19 11:35:43.808862 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.808867 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.808872 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.808876 | orchestrator |
2025-09-19 11:35:43.808881 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 11:35:43.808886 | orchestrator | Friday 19 September 2025 11:31:14 +0000 (0:00:00.315) 0:06:23.468 ******
2025-09-19 11:35:43.808891 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.808895 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.808900 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.808905 | orchestrator |
2025-09-19 11:35:43.808909 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 11:35:43.808914 | orchestrator | Friday 19 September 2025 11:31:15 +0000 (0:00:00.378) 0:06:23.846 ******
2025-09-19 11:35:43.808919 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.808924 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.808929 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.808933 | orchestrator |
2025-09-19 11:35:43.808941 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 11:35:43.808946 | orchestrator | Friday 19 September 2025 11:31:15 +0000 (0:00:00.377) 0:06:24.223 ******
2025-09-19 11:35:43.808950 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.808955 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.808960 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.808964 | orchestrator |
2025-09-19 11:35:43.808969 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 11:35:43.808974 | orchestrator | Friday 19 September 2025 11:31:16 +0000 (0:00:00.773) 0:06:24.997 ******
2025-09-19 11:35:43.808979 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.808983 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.808988 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.808993 | orchestrator |
2025-09-19 11:35:43.808997 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 11:35:43.809002 | orchestrator | Friday 19 September 2025 11:31:16 +0000 (0:00:00.381) 0:06:25.379 ******
2025-09-19 11:35:43.809007 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.809012 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.809016 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.809021 | orchestrator |
2025-09-19 11:35:43.809026 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 11:35:43.809031 | orchestrator | Friday 19 September 2025 11:31:17 +0000 (0:00:00.366) 0:06:25.745 ******
2025-09-19 11:35:43.809035 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.809040 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.809045 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.809049 | orchestrator |
2025-09-19 11:35:43.809054 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-09-19 11:35:43.809059 | orchestrator | Friday 19 September 2025 11:31:17 +0000 (0:00:00.553) 0:06:26.299 ******
2025-09-19 11:35:43.809064 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.809069 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.809073 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.809078 | orchestrator |
2025-09-19 11:35:43.809083 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-09-19 11:35:43.809091 | orchestrator | Friday 19 September 2025 11:31:18 +0000 (0:00:00.733) 0:06:27.032 ******
2025-09-19 11:35:43.809096 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 11:35:43.809100 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 11:35:43.809105 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 11:35:43.809110 | orchestrator |
2025-09-19 11:35:43.809115 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-09-19 11:35:43.809119 | orchestrator | Friday 19 September 2025 11:31:19 +0000 (0:00:00.662) 0:06:27.695 ******
2025-09-19 11:35:43.809124 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.809129 | orchestrator |
2025-09-19 11:35:43.809134 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-09-19 11:35:43.809138 | orchestrator | Friday 19 September 2025 11:31:19 +0000 (0:00:00.516) 0:06:28.211 ******
2025-09-19 11:35:43.809143 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.809148 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.809152 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.809157 | orchestrator |
2025-09-19 11:35:43.809162 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-09-19 11:35:43.809167 | orchestrator | Friday 19 September 2025 11:31:20 +0000 (0:00:00.588) 0:06:28.800 ******
2025-09-19 11:35:43.809171 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.809176 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.809181 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.809185 | orchestrator |
2025-09-19 11:35:43.809190 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-09-19 11:35:43.809195 | orchestrator | Friday 19 September 2025 11:31:20 +0000 (0:00:00.334) 0:06:29.134 ******
2025-09-19 11:35:43.809200 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.809204 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.809209 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.809214 | orchestrator |
2025-09-19 11:35:43.809221 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-09-19 11:35:43.809226 | orchestrator | Friday 19 September 2025 11:31:21 +0000 (0:00:00.642) 0:06:29.777 ******
2025-09-19 11:35:43.809230 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.809235 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.809240 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.809245 | orchestrator |
2025-09-19 11:35:43.809250 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-09-19 11:35:43.809254 | orchestrator | Friday 19 September 2025 11:31:21 +0000 (0:00:00.333) 0:06:30.111 ******
2025-09-19 11:35:43.809259 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-19 11:35:43.809264 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-19 11:35:43.809269 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-19 11:35:43.809273 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-19 11:35:43.809278 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-19 11:35:43.809283 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-19 11:35:43.809288 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-19 11:35:43.809292 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-19 11:35:43.809301 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-19 11:35:43.809320 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-19 11:35:43.809325 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-19 11:35:43.809330 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-19 11:35:43.809334 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-19 11:35:43.809339 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-19 11:35:43.809344 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-19 11:35:43.809348 | orchestrator |
2025-09-19 11:35:43.809353 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-09-19 11:35:43.809358 | orchestrator | Friday 19 September 2025 11:31:24 +0000 (0:00:02.600) 0:06:32.711 ******
2025-09-19 11:35:43.809363 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.809367 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.809372 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.809377 | orchestrator |
2025-09-19 11:35:43.809382 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-09-19 11:35:43.809386 | orchestrator | Friday 19 September 2025 11:31:24 +0000 (0:00:00.357) 0:06:33.069 ******
2025-09-19 11:35:43.809391 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.809396 | orchestrator |
2025-09-19 11:35:43.809400 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-09-19 11:35:43.809405 | orchestrator | Friday 19 September 2025 11:31:24 +0000 (0:00:00.526) 0:06:33.595 ******
2025-09-19 11:35:43.809410 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-19 11:35:43.809414 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-19 11:35:43.809419 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-19 11:35:43.809424 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-09-19 11:35:43.809429 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-09-19 11:35:43.809433 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-09-19 11:35:43.809438 | orchestrator |
2025-09-19 11:35:43.809443 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-09-19 11:35:43.809448 | orchestrator | Friday 19 September 2025 11:31:26 +0000 (0:00:01.328) 0:06:34.924 ******
2025-09-19 11:35:43.809452 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 11:35:43.809457 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-19 11:35:43.809462 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-19 11:35:43.809466 | orchestrator |
2025-09-19 11:35:43.809471 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-09-19 11:35:43.809476 | orchestrator | Friday 19 September 2025 11:31:28 +0000 (0:00:01.853) 0:06:36.777 ******
2025-09-19 11:35:43.809481 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 11:35:43.809485 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-19 11:35:43.809490 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:35:43.809495 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 11:35:43.809499 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-19 11:35:43.809504 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:35:43.809509 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 11:35:43.809513 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-19 11:35:43.809518 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:35:43.809523 | orchestrator |
2025-09-19 11:35:43.809528 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-09-19 11:35:43.809532 | orchestrator | Friday 19 September 2025 11:31:29 +0000 (0:00:01.349) 0:06:38.127 ******
2025-09-19 11:35:43.809540 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:35:43.809545 | orchestrator |
2025-09-19 11:35:43.809553 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-09-19 11:35:43.809558 | orchestrator | Friday 19 September 2025 11:31:31 +0000 (0:00:02.296) 0:06:40.423 ******
2025-09-19 11:35:43.809562 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.809567 | orchestrator |
2025-09-19 11:35:43.809572 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-09-19 11:35:43.809577 | orchestrator | Friday 19 September 2025 11:31:32 +0000 (0:00:00.571) 0:06:40.995 ******
2025-09-19 11:35:43.809582 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6', 'data_vg': 'ceph-4ec87955-83d4-5f81-a4e3-fa3184f5f6e6'})
2025-09-19 11:35:43.809587 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-499bb3ba-5d36-55d4-9ab4-77fea8769c5a', 'data_vg': 'ceph-499bb3ba-5d36-55d4-9ab4-77fea8769c5a'})
2025-09-19 11:35:43.809592 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f2e5a9ae-16db-5885-a5f1-5293896cd0a9', 'data_vg': 'ceph-f2e5a9ae-16db-5885-a5f1-5293896cd0a9'})
2025-09-19 11:35:43.809597 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-9f018b0b-9dc8-5104-9bc9-2c288294c8fd', 'data_vg': 'ceph-9f018b0b-9dc8-5104-9bc9-2c288294c8fd'})
2025-09-19 11:35:43.809605 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-482defc3-95b3-50a2-a4e9-5dea1f7a25a6', 'data_vg': 'ceph-482defc3-95b3-50a2-a4e9-5dea1f7a25a6'})
2025-09-19 11:35:43.809610 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5', 'data_vg': 'ceph-d15bf0b7-095a-52ef-97a5-c7d3cf055ef5'})
2025-09-19 11:35:43.809615 | orchestrator |
2025-09-19 11:35:43.809619 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-09-19 11:35:43.809624 | orchestrator | Friday 19 September 2025 11:32:15 +0000 (0:00:43.304) 0:07:24.300 ******
2025-09-19 11:35:43.809629 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.809634 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.809639 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.809643 | orchestrator |
2025-09-19 11:35:43.809662 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-09-19 11:35:43.809667 | orchestrator | Friday 19 September 2025 11:32:15 +0000 (0:00:00.309) 0:07:24.610 ******
2025-09-19 11:35:43.809672 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.809677 | orchestrator |
2025-09-19 11:35:43.809682 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-09-19 11:35:43.809686 | orchestrator | Friday 19 September 2025 11:32:16 +0000 (0:00:00.476) 0:07:25.087 ******
2025-09-19 11:35:43.809691 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.809696 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.809701 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.809705 | orchestrator |
2025-09-19 11:35:43.809710 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-09-19 11:35:43.809715 | orchestrator | Friday 19 September 2025 11:32:17 +0000 (0:00:00.899) 0:07:25.986 ******
2025-09-19 11:35:43.809719 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:35:43.809724 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:35:43.809729 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:35:43.809733 | orchestrator |
2025-09-19 11:35:43.809738 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-09-19 11:35:43.809743 | orchestrator | Friday 19 September 2025 11:32:19 +0000 (0:00:02.586) 0:07:28.572 ******
2025-09-19 11:35:43.809748 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.809753 | orchestrator |
2025-09-19 11:35:43.809757 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-09-19 11:35:43.809768 | orchestrator | Friday 19 September 2025 11:32:20 +0000 (0:00:00.447) 0:07:29.020 ******
2025-09-19 11:35:43.809772 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:35:43.809777 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:35:43.809782 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:35:43.809787 | orchestrator |
2025-09-19 11:35:43.809791 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-09-19 11:35:43.809796 | orchestrator | Friday 19 September 2025 11:32:21 +0000 (0:00:01.410) 0:07:30.430 ******
2025-09-19 11:35:43.809801 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:35:43.809806 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:35:43.809810 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:35:43.809815 | orchestrator |
2025-09-19 11:35:43.809820 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-09-19 11:35:43.809825 | orchestrator | Friday 19 September 2025 11:32:22 +0000 (0:00:01.177) 0:07:31.608 ******
2025-09-19 11:35:43.809829 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:35:43.809834 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:35:43.809839 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:35:43.809843 | orchestrator |
2025-09-19 11:35:43.809848 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-09-19 11:35:43.809853 | orchestrator | Friday 19 September 2025 11:32:24 +0000 (0:00:01.672) 0:07:33.281 ******
2025-09-19 11:35:43.809858 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.809862 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.809867 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.809872 | orchestrator |
2025-09-19 11:35:43.809877 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-09-19 11:35:43.809881 | orchestrator | Friday 19 September 2025 11:32:24 +0000 (0:00:00.357) 0:07:33.638 ******
2025-09-19 11:35:43.809886 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.809891 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.809899 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.809903 | orchestrator |
2025-09-19 11:35:43.809908 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-09-19 11:35:43.809913 | orchestrator | Friday 19 September 2025 11:32:25 +0000 (0:00:00.688) 0:07:34.327 ******
2025-09-19 11:35:43.809918 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-19 11:35:43.809923 | orchestrator | ok: [testbed-node-4] => (item=1)
2025-09-19 11:35:43.809927 | orchestrator | ok: [testbed-node-5] => (item=3)
2025-09-19 11:35:43.809932 | orchestrator | ok: [testbed-node-3] => (item=5)
2025-09-19 11:35:43.809937 | orchestrator | ok: [testbed-node-4] => (item=4)
2025-09-19 11:35:43.809941 | orchestrator | ok: [testbed-node-5] => (item=2)
2025-09-19 11:35:43.809946 | orchestrator |
2025-09-19 11:35:43.809951 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-09-19 11:35:43.809956 | orchestrator | Friday 19 September 2025 11:32:26 +0000 (0:00:01.062) 0:07:35.390 ******
2025-09-19 11:35:43.809960 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-09-19 11:35:43.809965 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-09-19 11:35:43.809970 | orchestrator | changed: [testbed-node-5] => (item=3)
2025-09-19 11:35:43.809974 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-09-19 11:35:43.809979 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-09-19 11:35:43.809984 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-09-19 11:35:43.809989 | orchestrator |
2025-09-19 11:35:43.809993 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-09-19 11:35:43.809998 | orchestrator | Friday 19 September 2025 11:32:28 +0000 (0:00:02.110) 0:07:37.500 ******
2025-09-19 11:35:43.810003 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-09-19 11:35:43.810008 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-09-19 11:35:43.810045 | orchestrator | changed: [testbed-node-5] => (item=3)
2025-09-19 11:35:43.810052 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-09-19 11:35:43.810060 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-09-19 11:35:43.810065 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-09-19 11:35:43.810070 | orchestrator |
2025-09-19 11:35:43.810075 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-09-19 11:35:43.810079 | orchestrator | Friday 19 September 2025 11:32:33 +0000 (0:00:04.445) 0:07:41.946 ******
2025-09-19 11:35:43.810084 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.810089 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.810094 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:35:43.810098 | orchestrator |
2025-09-19 11:35:43.810103 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-09-19 11:35:43.810108 | orchestrator | Friday 19 September 2025 11:32:36 +0000 (0:00:03.391) 0:07:45.338 ******
2025-09-19 11:35:43.810113 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.810117 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.810122 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-09-19 11:35:43.810127 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:35:43.810131 | orchestrator |
2025-09-19 11:35:43.810136 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-09-19 11:35:43.810141 | orchestrator | Friday 19 September 2025 11:32:49 +0000 (0:00:12.509) 0:07:57.847 ******
2025-09-19 11:35:43.810146 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.810150 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.810155 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.810160 | orchestrator |
2025-09-19 11:35:43.810165 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-19 11:35:43.810169 | orchestrator | Friday 19 September 2025 11:32:50 +0000 (0:00:01.147) 0:07:58.994 ******
2025-09-19 11:35:43.810174 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:35:43.810179 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:35:43.810184 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:35:43.810188 | orchestrator |
2025-09-19 11:35:43.810193 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-19 11:35:43.810198 | orchestrator | Friday 19 September 2025 11:32:50 +0000 (0:00:00.354) 0:07:59.349 ******
2025-09-19 11:35:43.810202 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:35:43.810207 | orchestrator |
2025-09-19 11:35:43.810212 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-19 11:35:43.810217 | orchestrator | Friday 19 September 2025 11:32:51 +0000 (0:00:00.547) 0:07:59.896 ******
2025-09-19 11:35:43.810222 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 11:35:43.810226 | orchestrator | skipping: [testbed-node-3] =>
(item=testbed-node-4)  2025-09-19 11:35:43.810231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:35:43.810236 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810240 | orchestrator | 2025-09-19 11:35:43.810245 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-19 11:35:43.810250 | orchestrator | Friday 19 September 2025 11:32:52 +0000 (0:00:01.012) 0:08:00.909 ****** 2025-09-19 11:35:43.810255 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810259 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.810264 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.810269 | orchestrator | 2025-09-19 11:35:43.810273 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-19 11:35:43.810278 | orchestrator | Friday 19 September 2025 11:32:52 +0000 (0:00:00.353) 0:08:01.263 ****** 2025-09-19 11:35:43.810283 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810288 | orchestrator | 2025-09-19 11:35:43.810292 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-19 11:35:43.810300 | orchestrator | Friday 19 September 2025 11:32:52 +0000 (0:00:00.220) 0:08:01.483 ****** 2025-09-19 11:35:43.810305 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810310 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.810315 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.810319 | orchestrator | 2025-09-19 11:35:43.810327 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-19 11:35:43.810332 | orchestrator | Friday 19 September 2025 11:32:53 +0000 (0:00:00.339) 0:08:01.823 ****** 2025-09-19 11:35:43.810337 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810342 | orchestrator | 2025-09-19 11:35:43.810347 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-19 11:35:43.810351 | orchestrator | Friday 19 September 2025 11:32:53 +0000 (0:00:00.276) 0:08:02.099 ****** 2025-09-19 11:35:43.810356 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810361 | orchestrator | 2025-09-19 11:35:43.810366 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-19 11:35:43.810370 | orchestrator | Friday 19 September 2025 11:32:53 +0000 (0:00:00.239) 0:08:02.338 ****** 2025-09-19 11:35:43.810375 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810380 | orchestrator | 2025-09-19 11:35:43.810384 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-19 11:35:43.810389 | orchestrator | Friday 19 September 2025 11:32:53 +0000 (0:00:00.133) 0:08:02.472 ****** 2025-09-19 11:35:43.810394 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810398 | orchestrator | 2025-09-19 11:35:43.810403 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-19 11:35:43.810408 | orchestrator | Friday 19 September 2025 11:32:54 +0000 (0:00:00.249) 0:08:02.722 ****** 2025-09-19 11:35:43.810413 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810417 | orchestrator | 2025-09-19 11:35:43.810422 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-19 11:35:43.810427 | orchestrator | Friday 19 September 2025 11:32:54 +0000 (0:00:00.900) 0:08:03.623 ****** 2025-09-19 11:35:43.810434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:35:43.810439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:35:43.810444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:35:43.810448 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
11:35:43.810453 | orchestrator | 2025-09-19 11:35:43.810458 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-19 11:35:43.810463 | orchestrator | Friday 19 September 2025 11:32:55 +0000 (0:00:00.502) 0:08:04.125 ****** 2025-09-19 11:35:43.810467 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810472 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.810477 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.810481 | orchestrator | 2025-09-19 11:35:43.810486 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-19 11:35:43.810491 | orchestrator | Friday 19 September 2025 11:32:55 +0000 (0:00:00.375) 0:08:04.501 ****** 2025-09-19 11:35:43.810496 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810500 | orchestrator | 2025-09-19 11:35:43.810505 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-19 11:35:43.810510 | orchestrator | Friday 19 September 2025 11:32:56 +0000 (0:00:00.243) 0:08:04.744 ****** 2025-09-19 11:35:43.810515 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810519 | orchestrator | 2025-09-19 11:35:43.810524 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-19 11:35:43.810529 | orchestrator | 2025-09-19 11:35:43.810534 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 11:35:43.810538 | orchestrator | Friday 19 September 2025 11:32:56 +0000 (0:00:00.666) 0:08:05.410 ****** 2025-09-19 11:35:43.810543 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:43.810555 | orchestrator | 2025-09-19 11:35:43.810563 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-09-19 11:35:43.810571 | orchestrator | Friday 19 September 2025 11:32:57 +0000 (0:00:01.103) 0:08:06.514 ****** 2025-09-19 11:35:43.810578 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:43.810585 | orchestrator | 2025-09-19 11:35:43.810592 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 11:35:43.810600 | orchestrator | Friday 19 September 2025 11:32:58 +0000 (0:00:01.048) 0:08:07.562 ****** 2025-09-19 11:35:43.810609 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.810617 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810624 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.810632 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.810641 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.810679 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.810686 | orchestrator | 2025-09-19 11:35:43.810690 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 11:35:43.810695 | orchestrator | Friday 19 September 2025 11:32:59 +0000 (0:00:00.852) 0:08:08.415 ****** 2025-09-19 11:35:43.810700 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.810705 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.810709 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.810714 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.810719 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.810724 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.810728 | orchestrator | 2025-09-19 11:35:43.810733 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 11:35:43.810738 | orchestrator | Friday 19 
September 2025 11:33:00 +0000 (0:00:00.945) 0:08:09.360 ****** 2025-09-19 11:35:43.810742 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.810747 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.810752 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.810757 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.810762 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.810766 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.810771 | orchestrator | 2025-09-19 11:35:43.810776 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 11:35:43.810784 | orchestrator | Friday 19 September 2025 11:33:01 +0000 (0:00:01.156) 0:08:10.516 ****** 2025-09-19 11:35:43.810789 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.810794 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.810798 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.810803 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.810808 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.810813 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.810817 | orchestrator | 2025-09-19 11:35:43.810822 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 11:35:43.810827 | orchestrator | Friday 19 September 2025 11:33:02 +0000 (0:00:00.909) 0:08:11.426 ****** 2025-09-19 11:35:43.810832 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.810836 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810841 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.810846 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.810851 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.810855 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.810860 | orchestrator | 2025-09-19 11:35:43.810865 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-09-19 11:35:43.810870 | orchestrator | Friday 19 September 2025 11:33:03 +0000 (0:00:00.979) 0:08:12.405 ****** 2025-09-19 11:35:43.810874 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.810879 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.810888 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.810893 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810897 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.810902 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.810907 | orchestrator | 2025-09-19 11:35:43.810912 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 11:35:43.810916 | orchestrator | Friday 19 September 2025 11:33:04 +0000 (0:00:00.535) 0:08:12.940 ****** 2025-09-19 11:35:43.810924 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.810929 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.810934 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.810939 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.810944 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.810949 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.810953 | orchestrator | 2025-09-19 11:35:43.810958 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 11:35:43.810963 | orchestrator | Friday 19 September 2025 11:33:05 +0000 (0:00:00.713) 0:08:13.653 ****** 2025-09-19 11:35:43.810968 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.810972 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.810977 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.810982 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.810987 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.810991 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.810996 | 
orchestrator | 2025-09-19 11:35:43.811001 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 11:35:43.811006 | orchestrator | Friday 19 September 2025 11:33:06 +0000 (0:00:01.003) 0:08:14.657 ****** 2025-09-19 11:35:43.811010 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.811015 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.811020 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.811024 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.811029 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.811034 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.811038 | orchestrator | 2025-09-19 11:35:43.811043 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 11:35:43.811048 | orchestrator | Friday 19 September 2025 11:33:07 +0000 (0:00:01.095) 0:08:15.753 ****** 2025-09-19 11:35:43.811053 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.811058 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.811062 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.811067 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.811072 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.811077 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.811081 | orchestrator | 2025-09-19 11:35:43.811086 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 11:35:43.811091 | orchestrator | Friday 19 September 2025 11:33:07 +0000 (0:00:00.544) 0:08:16.297 ****** 2025-09-19 11:35:43.811096 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.811100 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.811105 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.811110 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.811114 | orchestrator | skipping: [testbed-node-4] 2025-09-19 
11:35:43.811119 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.811124 | orchestrator | 2025-09-19 11:35:43.811129 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 11:35:43.811134 | orchestrator | Friday 19 September 2025 11:33:08 +0000 (0:00:00.693) 0:08:16.991 ****** 2025-09-19 11:35:43.811138 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.811143 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.811148 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.811153 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.811157 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.811162 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.811167 | orchestrator | 2025-09-19 11:35:43.811175 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 11:35:43.811180 | orchestrator | Friday 19 September 2025 11:33:08 +0000 (0:00:00.540) 0:08:17.532 ****** 2025-09-19 11:35:43.811185 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.811190 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.811195 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.811199 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.811204 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.811209 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.811214 | orchestrator | 2025-09-19 11:35:43.811218 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 11:35:43.811223 | orchestrator | Friday 19 September 2025 11:33:09 +0000 (0:00:00.735) 0:08:18.268 ****** 2025-09-19 11:35:43.811227 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.811232 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.811236 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.811241 | orchestrator | ok: 
[testbed-node-3] 2025-09-19 11:35:43.811245 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.811250 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.811254 | orchestrator | 2025-09-19 11:35:43.811259 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 11:35:43.811266 | orchestrator | Friday 19 September 2025 11:33:10 +0000 (0:00:00.582) 0:08:18.850 ****** 2025-09-19 11:35:43.811271 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.811275 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.811280 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.811285 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.811289 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.811294 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.811298 | orchestrator | 2025-09-19 11:35:43.811303 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 11:35:43.811307 | orchestrator | Friday 19 September 2025 11:33:10 +0000 (0:00:00.688) 0:08:19.539 ****** 2025-09-19 11:35:43.811312 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:35:43.811316 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:35:43.811321 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:35:43.811325 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.811330 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.811334 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.811339 | orchestrator | 2025-09-19 11:35:43.811343 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 11:35:43.811348 | orchestrator | Friday 19 September 2025 11:33:11 +0000 (0:00:00.501) 0:08:20.041 ****** 2025-09-19 11:35:43.811352 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.811356 | orchestrator | ok: [testbed-node-1] 2025-09-19 
11:35:43.811361 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.811365 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.811370 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.811375 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.811379 | orchestrator | 2025-09-19 11:35:43.811384 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 11:35:43.811390 | orchestrator | Friday 19 September 2025 11:33:12 +0000 (0:00:00.690) 0:08:20.732 ****** 2025-09-19 11:35:43.811395 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.811400 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.811404 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.811408 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.811413 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.811417 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.811422 | orchestrator | 2025-09-19 11:35:43.811426 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 11:35:43.811431 | orchestrator | Friday 19 September 2025 11:33:12 +0000 (0:00:00.575) 0:08:21.307 ****** 2025-09-19 11:35:43.811436 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.811443 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.811448 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.811452 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.811457 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.811461 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.811466 | orchestrator | 2025-09-19 11:35:43.811470 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-09-19 11:35:43.811475 | orchestrator | Friday 19 September 2025 11:33:13 +0000 (0:00:01.161) 0:08:22.468 ****** 2025-09-19 11:35:43.811479 | orchestrator | changed: [testbed-node-0] 2025-09-19 
11:35:43.811484 | orchestrator | 2025-09-19 11:35:43.811488 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-19 11:35:43.811493 | orchestrator | Friday 19 September 2025 11:33:17 +0000 (0:00:04.000) 0:08:26.469 ****** 2025-09-19 11:35:43.811497 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.811502 | orchestrator | 2025-09-19 11:35:43.811506 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-19 11:35:43.811511 | orchestrator | Friday 19 September 2025 11:33:20 +0000 (0:00:02.426) 0:08:28.895 ****** 2025-09-19 11:35:43.811515 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.811520 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:43.811524 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:43.811529 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:35:43.811533 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:35:43.811538 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:35:43.811542 | orchestrator | 2025-09-19 11:35:43.811547 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-19 11:35:43.811551 | orchestrator | Friday 19 September 2025 11:33:21 +0000 (0:00:01.440) 0:08:30.336 ****** 2025-09-19 11:35:43.811556 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:43.811560 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:43.811565 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:43.811569 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:35:43.811574 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:35:43.811578 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:35:43.811583 | orchestrator | 2025-09-19 11:35:43.811587 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-09-19 11:35:43.811593 | orchestrator | Friday 19 September 2025 11:33:22 +0000 
(0:00:01.102) 0:08:31.438 ****** 2025-09-19 11:35:43.811601 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:43.811609 | orchestrator | 2025-09-19 11:35:43.811617 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-19 11:35:43.811626 | orchestrator | Friday 19 September 2025 11:33:24 +0000 (0:00:01.246) 0:08:32.685 ****** 2025-09-19 11:35:43.811633 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:43.811640 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:43.811660 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:43.811668 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:35:43.811675 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:35:43.811682 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:35:43.811689 | orchestrator | 2025-09-19 11:35:43.811696 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-19 11:35:43.811703 | orchestrator | Friday 19 September 2025 11:33:25 +0000 (0:00:01.563) 0:08:34.248 ****** 2025-09-19 11:35:43.811707 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:43.811712 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:43.811716 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:43.811720 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:35:43.811725 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:35:43.811729 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:35:43.811733 | orchestrator | 2025-09-19 11:35:43.811738 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-19 11:35:43.811750 | orchestrator | Friday 19 September 2025 11:33:29 +0000 (0:00:03.725) 0:08:37.974 ****** 2025-09-19 11:35:43.811754 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:43.811759 | orchestrator | 2025-09-19 11:35:43.811764 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-19 11:35:43.811768 | orchestrator | Friday 19 September 2025 11:33:30 +0000 (0:00:01.474) 0:08:39.448 ****** 2025-09-19 11:35:43.811772 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.811777 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.811781 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:35:43.811786 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.811790 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.811794 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.811799 | orchestrator | 2025-09-19 11:35:43.811803 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-19 11:35:43.811808 | orchestrator | Friday 19 September 2025 11:33:31 +0000 (0:00:00.639) 0:08:40.088 ****** 2025-09-19 11:35:43.811812 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:35:43.811817 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:35:43.811821 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:35:43.811826 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:35:43.811830 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:35:43.811834 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:35:43.811839 | orchestrator | 2025-09-19 11:35:43.811843 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-19 11:35:43.811848 | orchestrator | Friday 19 September 2025 11:33:33 +0000 (0:00:02.497) 0:08:42.585 ****** 2025-09-19 11:35:43.811852 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:35:43.811860 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:35:43.811864 | orchestrator | ok: 
[testbed-node-2] 2025-09-19 11:35:43.811869 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.811873 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.811878 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.811882 | orchestrator | 2025-09-19 11:35:43.811886 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-19 11:35:43.811891 | orchestrator | 2025-09-19 11:35:43.811895 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 11:35:43.811900 | orchestrator | Friday 19 September 2025 11:33:34 +0000 (0:00:00.878) 0:08:43.463 ****** 2025-09-19 11:35:43.811904 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:43.811909 | orchestrator | 2025-09-19 11:35:43.811913 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 11:35:43.811918 | orchestrator | Friday 19 September 2025 11:33:35 +0000 (0:00:00.774) 0:08:44.238 ****** 2025-09-19 11:35:43.811922 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:43.811926 | orchestrator | 2025-09-19 11:35:43.811931 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 11:35:43.811935 | orchestrator | Friday 19 September 2025 11:33:36 +0000 (0:00:00.514) 0:08:44.753 ****** 2025-09-19 11:35:43.811940 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.811944 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.811948 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.811953 | orchestrator | 2025-09-19 11:35:43.811957 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 11:35:43.811962 | orchestrator | 
Friday 19 September 2025 11:33:36 +0000 (0:00:00.307) 0:08:45.060 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Friday 19 September 2025 11:33:37 +0000 (0:00:01.072) 0:08:46.133 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 19 September 2025 11:33:38 +0000 (0:00:00.760) 0:08:46.893 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 19 September 2025 11:33:38 +0000 (0:00:00.698) 0:08:47.592 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 19 September 2025 11:33:39 +0000 (0:00:00.308) 0:08:47.901 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 19 September 2025 11:33:39 +0000 (0:00:00.644) 0:08:48.545 ******
skipping: [testbed-node-4]
skipping: [testbed-node-3]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 19 September 2025 11:33:40 +0000 (0:00:00.349) 0:08:48.895 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 19 September 2025 11:33:41 +0000 (0:00:00.807) 0:08:49.703 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 19 September 2025 11:33:41 +0000 (0:00:00.731) 0:08:50.434 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 19 September 2025 11:33:42 +0000 (0:00:00.605) 0:08:51.039 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 19 September 2025 11:33:42 +0000 (0:00:00.324) 0:08:51.364 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 19 September 2025 11:33:43 +0000 (0:00:00.362) 0:08:51.727 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 19 September 2025 11:33:43 +0000 (0:00:00.338) 0:08:52.065 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 19 September 2025 11:33:44 +0000 (0:00:00.702) 0:08:52.768 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 19 September 2025 11:33:44 +0000 (0:00:00.327) 0:08:53.095 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 19 September 2025 11:33:44 +0000 (0:00:00.299) 0:08:53.395 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 19 September 2025 11:33:45 +0000 (0:00:00.313) 0:08:53.709 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 19 September 2025 11:33:45 +0000 (0:00:00.651) 0:08:54.360 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Friday 19 September 2025 11:33:46 +0000 (0:00:00.549) 0:08:54.910 ******
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Friday 19 September 2025 11:33:46 +0000 (0:00:00.647) 0:08:55.557 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Friday 19 September 2025 11:33:49 +0000 (0:00:02.236) 0:08:57.794 ******
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Friday 19 September 2025 11:33:49 +0000 (0:00:00.247) 0:08:58.042 ******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Friday 19 September 2025 11:33:57 +0000 (0:00:08.495) 0:09:06.537 ******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Friday 19 September 2025 11:34:01 +0000 (0:00:03.568) 0:09:10.105 ******
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
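The two `changed` items in the "Create filesystem pools" task spell out the pool spec the role iterated over. Read back into a group_vars sketch (variable names follow ceph-ansible conventions and are an assumption here, not taken from this job's inventory), the same pools would come from roughly:

```yaml
# Hypothetical group_vars sketch -- values read off the log items above;
# the cephfs_* variable names are assumed from ceph-ansible conventions.
cephfs: cephfs                 # filesystem name
cephfs_data_pool:
  name: cephfs_data
  pg_num: 16
  pgp_num: 16
  rule_name: replicated_rule
  size: 3
  application: cephfs
cephfs_metadata_pool:
  name: cephfs_metadata
  pg_num: 16
  pgp_num: 16
  rule_name: replicated_rule
  size: 3
  application: cephfs
```

Both pools use the default `replicated_rule`, which is why the preceding "Get current default crush rule name" task had nothing to override and was skipped.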
TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Friday 19 September 2025 11:34:02 +0000 (0:00:00.563) 0:09:10.668 ******
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Friday 19 September 2025 11:34:03 +0000 (0:00:01.445) 0:09:12.113 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Friday 19 September 2025 11:34:05 +0000 (0:00:02.206) 0:09:14.320 ******
changed: [testbed-node-3] => (item=None)
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-3] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-mds : Create mds keyring] *******************************************
Friday 19 September 2025 11:34:07 +0000 (0:00:01.378) 0:09:15.698 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Friday 19 September 2025 11:34:09 +0000 (0:00:02.800) 0:09:18.499 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Friday 19 September 2025 11:34:10 +0000 (0:00:00.642) 0:09:19.142 ******
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
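The "Create mds keyring" step boils down to a `ceph auth get-or-create` for an `mds.<hostname>` entity written into the per-host directory created above. A hedged sketch of an equivalent task (the capability strings are the usual MDS defaults and an assumption here, not read from this run):

```yaml
# Hypothetical equivalent of the role's keyring task, not its exact source.
- name: Create mds keyring for this host
  command: >
    ceph auth get-or-create mds.{{ inventory_hostname }}
    mon 'allow profile mds' osd 'allow rwx' mds 'allow'
    -o /var/lib/ceph/mds/ceph-{{ inventory_hostname }}/keyring
  args:
    creates: "/var/lib/ceph/mds/ceph-{{ inventory_hostname }}/keyring"
```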
TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Friday 19 September 2025 11:34:11 +0000 (0:00:00.536) 0:09:19.678 ******
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Friday 19 September 2025 11:34:11 +0000 (0:00:00.814) 0:09:20.492 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Friday 19 September 2025 11:34:13 +0000 (0:00:01.590) 0:09:22.082 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Friday 19 September 2025 11:34:14 +0000 (0:00:01.342) 0:09:23.425 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Systemd start mds container] **********************************
Friday 19 September 2025 11:34:16 +0000 (0:00:02.020) 0:09:25.446 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Friday 19 September 2025 11:34:19 +0000 (0:00:02.467) 0:09:27.913 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Friday 19 September 2025 11:34:20 +0000 (0:00:01.338) 0:09:29.252 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Friday 19 September 2025 11:34:21 +0000 (0:00:01.016) 0:09:30.269 ******
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
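"Generate systemd unit file" templates a container wrapper unit per host, and "Generate systemd ceph-mds target file" groups the instances under a `ceph-mds.target` that the next task enables. A rough sketch of what such a generated `ceph-mds@.service` can look like (paths, image name, and run options are illustrative assumptions, not the role's actual template):

```ini
; Illustrative sketch only -- the real unit is templated by ceph-ansible.
[Unit]
Description=Ceph MDS
After=network-online.target docker.service
Requires=docker.service
PartOf=ceph-mds.target

[Service]
ExecStartPre=-/usr/bin/docker rm -f ceph-mds-%i
ExecStart=/usr/bin/docker run --rm --net=host --name ceph-mds-%i \
  -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph \
  quay.io/ceph/daemon:latest mds
ExecStop=/usr/bin/docker stop ceph-mds-%i
Restart=always

[Install]
WantedBy=ceph-mds.target
```

With `%i` template instancing, `systemctl start ceph-mds@testbed-node-3` maps onto one container per host, which matches the subsequent "Systemd start mds container" task reporting `changed` on all three nodes.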
Friday 19 September 2025 11:34:22 +0000 (0:00:00.550) 0:09:30.819 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Friday 19 September 2025 11:34:22 +0000 (0:00:00.328) 0:09:31.147 ******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Friday 19 September 2025 11:34:23 +0000 (0:00:01.194) 0:09:32.342 ******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Friday 19 September 2025 11:34:25 +0000 (0:00:01.421) 0:09:33.764 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Friday 19 September 2025 11:34:25 +0000 (0:00:00.613) 0:09:34.377 ******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Friday 19 September 2025 11:34:26 +0000 (0:00:00.786) 0:09:35.163 ******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Friday 19 September 2025 11:34:27 +0000 (0:00:00.761) 0:09:35.925 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Friday 19 September 2025 11:34:27 +0000 (0:00:00.477) 0:09:36.403 ******
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
TASK [ceph-handler : Check for a mds container] ********************************
Friday 19 September 2025 11:34:29 +0000 (0:00:01.559) 0:09:37.962 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Friday 19 September 2025 11:34:30 +0000 (0:00:01.025) 0:09:38.988 ******
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-3]

TASK [ceph-handler : Check for a mgr container] ********************************
Friday 19 September 2025 11:34:31 +0000 (0:00:00.881) 0:09:39.870 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Friday 19 September 2025 11:34:31 +0000 (0:00:00.544) 0:09:40.415 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Friday 19 September 2025 11:34:32 +0000 (0:00:01.198) 0:09:41.613 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Friday 19 September 2025 11:34:33 +0000 (0:00:00.379) 0:09:41.993 ******
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Friday 19 September 2025 11:34:34 +0000 (0:00:00.891) 0:09:42.884 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Friday 19 September 2025 11:34:34 +0000 (0:00:00.761) 0:09:43.645 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Friday 19 September 2025 11:34:35 +0000 (0:00:00.658) 0:09:44.304 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Friday 19 September 2025 11:34:35 +0000 (0:00:00.330) 0:09:44.635 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Friday 19 September 2025 11:34:36 +0000 (0:00:00.356) 0:09:44.992 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Friday 19 September 2025 11:34:36 +0000 (0:00:00.343) 0:09:45.335 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Friday 19 September 2025 11:34:37 +0000 (0:00:00.768) 0:09:46.104 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Friday 19 September 2025 11:34:37 +0000 (0:00:00.355) 0:09:46.460 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Friday 19 September 2025 11:34:38 +0000 (0:00:00.388) 0:09:46.849 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Friday 19 September 2025 11:34:38 +0000 (0:00:00.379) 0:09:47.228 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Friday 19 September 2025 11:34:39 +0000 (0:00:00.872) 0:09:48.101 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-rgw : Include common.yml] *******************************************
Friday 19 September 2025 11:34:40 +0000 (0:00:00.641) 0:09:48.743 ******
included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Get keys from monitors] ***************************************
Friday 19 September 2025 11:34:40 +0000 (0:00:00.903) 0:09:49.647 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Friday 19 September 2025 11:34:43 +0000 (0:00:02.185) 0:09:51.832 ******
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-5] => (item=None)
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
Friday 19 September 2025 11:34:44 +0000 (0:00:01.398) 0:09:53.230 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
Friday 19 September 2025 11:34:44 +0000 (0:00:00.334) 0:09:53.565 ******
included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Create rados gateway directories] *****************************
Friday 19 September 2025 11:34:45 +0000 (0:00:00.839) 0:09:54.405 ******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-rgw : Create rgw keyrings] ******************************************
Friday 19 September 2025 11:34:46 +0000 (0:00:00.832) 0:09:55.237 ******
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]

TASK [ceph-rgw : Get keys from monitors] ***************************************
Friday 19 September 2025 11:34:50 +0000 (0:00:04.345) 0:09:59.583 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Friday 19 September 2025 11:34:53 +0000 (0:00:02.492) 0:10:02.075 ******
changed: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Rgw pool creation tasks] **************************************
Friday 19 September 2025 11:34:55 +0000 (0:00:01.644) 0:10:03.720 ******
included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3

TASK [ceph-rgw : Create ec profile] ********************************************
Friday 19 September 2025 11:34:55 +0000 (0:00:00.250) 0:10:03.970 ******
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:35:43.814127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:35:43.814132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:35:43.814137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:35:43.814141 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.814146 | orchestrator | 2025-09-19 11:35:43.814150 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-19 11:35:43.814155 | orchestrator | Friday 19 September 2025 11:34:55 +0000 (0:00:00.604) 0:10:04.574 ****** 2025-09-19 11:35:43.814163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:35:43.814167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:35:43.814172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:35:43.814177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:35:43.814181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 11:35:43.814186 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.814190 | orchestrator | 2025-09-19 11:35:43.814195 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-19 11:35:43.814200 | orchestrator | Friday 19 September 2025 11:34:56 +0000 (0:00:00.590) 0:10:05.164 ****** 2025-09-19 11:35:43.814204 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 11:35:43.814209 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 11:35:43.814213 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 11:35:43.814218 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 11:35:43.814222 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 11:35:43.814227 | orchestrator | 2025-09-19 11:35:43.814231 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-19 11:35:43.814236 | orchestrator | Friday 19 September 2025 11:35:28 +0000 (0:00:31.832) 0:10:36.998 ****** 2025-09-19 11:35:43.814241 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.814245 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.814250 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.814254 | orchestrator | 2025-09-19 11:35:43.814259 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-19 11:35:43.814263 | orchestrator | Friday 19 September 2025 11:35:28 +0000 (0:00:00.386) 0:10:37.384 
****** 2025-09-19 11:35:43.814267 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.814271 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.814275 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.814280 | orchestrator | 2025-09-19 11:35:43.814284 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-19 11:35:43.814288 | orchestrator | Friday 19 September 2025 11:35:29 +0000 (0:00:01.217) 0:10:38.601 ****** 2025-09-19 11:35:43.814292 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:43.814296 | orchestrator | 2025-09-19 11:35:43.814300 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-19 11:35:43.814304 | orchestrator | Friday 19 September 2025 11:35:30 +0000 (0:00:00.602) 0:10:39.204 ****** 2025-09-19 11:35:43.814308 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:43.814312 | orchestrator | 2025-09-19 11:35:43.814316 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-19 11:35:43.814320 | orchestrator | Friday 19 September 2025 11:35:31 +0000 (0:00:00.516) 0:10:39.720 ****** 2025-09-19 11:35:43.814327 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:35:43.814331 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:35:43.814336 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:35:43.814340 | orchestrator | 2025-09-19 11:35:43.814344 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-19 11:35:43.814348 | orchestrator | Friday 19 September 2025 11:35:32 +0000 (0:00:01.601) 0:10:41.322 ****** 2025-09-19 11:35:43.814354 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:35:43.814358 | orchestrator | changed: 
[testbed-node-4] 2025-09-19 11:35:43.814362 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:35:43.814366 | orchestrator | 2025-09-19 11:35:43.814370 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-19 11:35:43.814375 | orchestrator | Friday 19 September 2025 11:35:33 +0000 (0:00:01.215) 0:10:42.538 ****** 2025-09-19 11:35:43.814379 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:35:43.814383 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:35:43.814387 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:35:43.814391 | orchestrator | 2025-09-19 11:35:43.814395 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-19 11:35:43.814399 | orchestrator | Friday 19 September 2025 11:35:35 +0000 (0:00:01.868) 0:10:44.406 ****** 2025-09-19 11:35:43.814403 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 11:35:43.814407 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 11:35:43.814412 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 11:35:43.814416 | orchestrator | 2025-09-19 11:35:43.814420 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 11:35:43.814424 | orchestrator | Friday 19 September 2025 11:35:38 +0000 (0:00:02.844) 0:10:47.250 ****** 2025-09-19 11:35:43.814428 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.814432 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.814436 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.814440 | orchestrator | 2025-09-19 11:35:43.814444 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-09-19 11:35:43.814448 | orchestrator | Friday 19 September 2025 11:35:38 +0000 (0:00:00.350) 0:10:47.601 ****** 2025-09-19 11:35:43.814453 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:35:43.814457 | orchestrator | 2025-09-19 11:35:43.814461 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-19 11:35:43.814465 | orchestrator | Friday 19 September 2025 11:35:39 +0000 (0:00:00.819) 0:10:48.421 ****** 2025-09-19 11:35:43.814469 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.814473 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.814477 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.814481 | orchestrator | 2025-09-19 11:35:43.814485 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-19 11:35:43.814490 | orchestrator | Friday 19 September 2025 11:35:40 +0000 (0:00:00.345) 0:10:48.766 ****** 2025-09-19 11:35:43.814494 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.814498 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:35:43.814502 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:35:43.814506 | orchestrator | 2025-09-19 11:35:43.814510 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-19 11:35:43.814535 | orchestrator | Friday 19 September 2025 11:35:40 +0000 (0:00:00.383) 0:10:49.150 ****** 2025-09-19 11:35:43.814539 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:35:43.814543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:35:43.814547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:35:43.814554 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:35:43.814559 | 
orchestrator | 2025-09-19 11:35:43.814563 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-19 11:35:43.814567 | orchestrator | Friday 19 September 2025 11:35:41 +0000 (0:00:00.940) 0:10:50.091 ****** 2025-09-19 11:35:43.814571 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:35:43.814575 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:35:43.814579 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:35:43.814583 | orchestrator | 2025-09-19 11:35:43.814587 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:35:43.814594 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-09-19 11:35:43.814598 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-19 11:35:43.814602 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-19 11:35:43.814606 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-09-19 11:35:43.814610 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-19 11:35:43.814614 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-19 11:35:43.814618 | orchestrator | 2025-09-19 11:35:43.814622 | orchestrator | 2025-09-19 11:35:43.814627 | orchestrator | 2025-09-19 11:35:43.814631 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:35:43.814635 | orchestrator | Friday 19 September 2025 11:35:41 +0000 (0:00:00.274) 0:10:50.365 ****** 2025-09-19 11:35:43.814639 | orchestrator | =============================================================================== 2025-09-19 11:35:43.814643 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 44.56s 2025-09-19 11:35:43.814664 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.30s 2025-09-19 11:35:43.814672 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.83s 2025-09-19 11:35:43.814680 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.18s 2025-09-19 11:35:43.814686 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.80s 2025-09-19 11:35:43.814693 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.51s 2025-09-19 11:35:43.814699 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 11.00s 2025-09-19 11:35:43.814705 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.09s 2025-09-19 11:35:43.814713 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.50s 2025-09-19 11:35:43.814717 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.47s 2025-09-19 11:35:43.814721 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.44s 2025-09-19 11:35:43.814725 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.24s 2025-09-19 11:35:43.814729 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.45s 2025-09-19 11:35:43.814733 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.35s 2025-09-19 11:35:43.814737 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.00s 2025-09-19 11:35:43.814741 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.73s 2025-09-19 11:35:43.814745 | orchestrator | ceph-mds : 
Create ceph filesystem --------------------------------------- 3.57s 2025-09-19 11:35:43.814753 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.55s 2025-09-19 11:35:43.814757 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.39s 2025-09-19 11:35:43.814761 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.30s 2025-09-19 11:35:43.814765 | orchestrator | 2025-09-19 11:35:43 | INFO  | Task 397b181c-ee4d-42bc-af93-0ded64c7c89d is in state STARTED 2025-09-19 11:35:43.814769 | orchestrator | 2025-09-19 11:35:43 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED 2025-09-19 11:35:43.814773 | orchestrator | 2025-09-19 11:35:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:35:46.846922 | orchestrator | 2025-09-19 11:35:46 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state STARTED 2025-09-19 11:35:46.849252 | orchestrator | 2025-09-19 11:35:46 | INFO  | Task 397b181c-ee4d-42bc-af93-0ded64c7c89d is in state STARTED 2025-09-19 11:35:46.851559 | orchestrator | 2025-09-19 11:35:46 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED 2025-09-19 11:35:46.852328 | orchestrator | 2025-09-19 11:35:46 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:36:50.978886 | orchestrator | 2025-09-19 11:36:50 | INFO  | Task 86d506b2-55c6-4ce4-b83d-bc5d83ecf478 is in state SUCCESS 2025-09-19 11:36:50.980101 | orchestrator | 
2025-09-19 11:36:50.980123 | orchestrator |
PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on Kolla action] ***************************************
Friday 19 September 2025 11:33:59 +0000 (0:00:00.261) 0:00:00.261 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Group hosts based on enabled services] ***********************************
Friday 19 September 2025 11:33:59 +0000 (0:00:00.301) 0:00:00.562 ******
ok: [testbed-node-0] => (item=enable_opensearch_True)
ok: [testbed-node-1] => (item=enable_opensearch_True)
ok: [testbed-node-2] => (item=enable_opensearch_True)

PLAY [Apply role opensearch] ***************************************************

TASK [opensearch : include_tasks] **********************************************
Friday 19 September 2025 11:34:00 +0000 (0:00:00.436) 0:00:00.999 ******
included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [opensearch : Setting sysctl values] **************************************
Friday 19 September 2025 11:34:00 +0000 (0:00:00.568) 0:00:01.568 ******
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})

TASK [opensearch : Ensuring config directories exist] **************************
Friday 19 September 2025 11:34:02 +0000 (0:00:01.650) 0:00:03.218 ******
changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})

TASK [opensearch : include_tasks] **********************************************
Friday 19 September 2025 11:34:04 +0000 (0:00:02.050) 0:00:05.269 ******
included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
Friday 19 September 2025 11:34:04 +0000 (0:00:00.571) 0:00:05.841 ******
changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})

TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
Friday 19 September 2025 11:34:08 +0000 (0:00:03.147) 0:00:08.989 ******
skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
skipping: [testbed-node-2]

TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
Friday 19 September 2025 11:34:09 +0000 (0:00:01.051) 0:00:10.040 ******
skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
skipping: [testbed-node-2]

TASK [opensearch : Copying over config.json files for services] ****************
Friday 19 September 2025 11:34:10 +0000 (0:00:00.986) 0:00:11.027 ******
changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})

TASK [opensearch : Copying over opensearch service config file] ****************
Friday 19 September 2025 11:34:12 +0000 (0:00:02.668) 0:00:13.695 ******
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [opensearch : Copying over opensearch-dashboards config file] *************
Friday 19 September 2025 11:34:16 +0000 (0:00:03.340) 0:00:17.035 ******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [opensearch : Check opensearch containers] ********************************
Friday 19 September 2025 11:34:18 +0000 (0:00:02.107) 0:00:19.143 ******
changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601',
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 11:36:50.981081 | orchestrator | 2025-09-19 11:36:50.981088 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 11:36:50.981095 | orchestrator | Friday 19 September 2025 11:34:20 +0000 (0:00:02.083) 0:00:21.227 ****** 2025-09-19 11:36:50.981101 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:36:50.981107 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:36:50.981114 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:36:50.981120 | orchestrator | 2025-09-19 11:36:50.981126 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-19 11:36:50.981133 | orchestrator | Friday 19 September 2025 11:34:20 +0000 (0:00:00.364) 0:00:21.591 ****** 2025-09-19 11:36:50.981139 | orchestrator | 2025-09-19 11:36:50.981145 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-19 11:36:50.981152 | orchestrator | Friday 19 September 2025 11:34:20 +0000 (0:00:00.079) 0:00:21.671 ****** 2025-09-19 11:36:50.981159 | orchestrator | 2025-09-19 11:36:50.981165 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-19 11:36:50.981171 | orchestrator | Friday 19 September 2025 11:34:20 +0000 (0:00:00.081) 0:00:21.752 ****** 2025-09-19 11:36:50.981178 | orchestrator | 2025-09-19 11:36:50.981184 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-19 11:36:50.981191 | orchestrator | Friday 19 September 2025 11:34:21 +0000 (0:00:00.285) 0:00:22.037 ****** 2025-09-19 11:36:50.981197 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:36:50.981204 | orchestrator | 2025-09-19 11:36:50.981211 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-19 11:36:50.981218 | 
orchestrator | Friday 19 September 2025 11:34:21 +0000 (0:00:00.226) 0:00:22.264 ****** 2025-09-19 11:36:50.981225 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:36:50.981231 | orchestrator | 2025-09-19 11:36:50.981238 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-19 11:36:50.981244 | orchestrator | Friday 19 September 2025 11:34:21 +0000 (0:00:00.274) 0:00:22.539 ****** 2025-09-19 11:36:50.981251 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:36:50.981257 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:36:50.981264 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:36:50.981271 | orchestrator | 2025-09-19 11:36:50.981277 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-19 11:36:50.981284 | orchestrator | Friday 19 September 2025 11:35:21 +0000 (0:00:59.830) 0:01:22.370 ****** 2025-09-19 11:36:50.981291 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:36:50.981298 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:36:50.981305 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:36:50.981312 | orchestrator | 2025-09-19 11:36:50.981319 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 11:36:50.981325 | orchestrator | Friday 19 September 2025 11:36:39 +0000 (0:01:18.099) 0:02:40.470 ****** 2025-09-19 11:36:50.981332 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:36:50.981340 | orchestrator | 2025-09-19 11:36:50.981346 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-19 11:36:50.981358 | orchestrator | Friday 19 September 2025 11:36:40 +0000 (0:00:00.657) 0:02:41.127 ****** 2025-09-19 11:36:50.981365 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:36:50.981372 | orchestrator | 2025-09-19 
11:36:50.981379 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-19 11:36:50.981385 | orchestrator | Friday 19 September 2025 11:36:42 +0000 (0:00:02.462) 0:02:43.590 ****** 2025-09-19 11:36:50.981395 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:36:50.981401 | orchestrator | 2025-09-19 11:36:50.981408 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-19 11:36:50.981416 | orchestrator | Friday 19 September 2025 11:36:44 +0000 (0:00:02.319) 0:02:45.909 ****** 2025-09-19 11:36:50.981423 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:36:50.981431 | orchestrator | 2025-09-19 11:36:50.981438 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-19 11:36:50.981446 | orchestrator | Friday 19 September 2025 11:36:47 +0000 (0:00:02.944) 0:02:48.854 ****** 2025-09-19 11:36:50.981454 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:36:50.981462 | orchestrator | 2025-09-19 11:36:50.981474 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:36:50.981483 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 11:36:50.981491 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 11:36:50.981498 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 11:36:50.981505 | orchestrator | 2025-09-19 11:36:50.981512 | orchestrator | 2025-09-19 11:36:50.981519 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:36:50.981526 | orchestrator | Friday 19 September 2025 11:36:50 +0000 (0:00:02.700) 0:02:51.554 ****** 2025-09-19 11:36:50.981533 | orchestrator | 
=============================================================================== 2025-09-19 11:36:50.981539 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 78.10s 2025-09-19 11:36:50.981559 | orchestrator | opensearch : Restart opensearch container ------------------------------ 59.83s 2025-09-19 11:36:50.981566 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.34s 2025-09-19 11:36:50.981573 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.15s 2025-09-19 11:36:50.981580 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.94s 2025-09-19 11:36:50.981589 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.70s 2025-09-19 11:36:50.981596 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.67s 2025-09-19 11:36:50.981603 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.46s 2025-09-19 11:36:50.981610 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.32s 2025-09-19 11:36:50.981617 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.11s 2025-09-19 11:36:50.981623 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.08s 2025-09-19 11:36:50.981629 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.05s 2025-09-19 11:36:50.981636 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.65s 2025-09-19 11:36:50.981643 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.05s 2025-09-19 11:36:50.981650 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.99s 2025-09-19 11:36:50.981657 | orchestrator | 
opensearch : include_tasks ---------------------------------------------- 0.66s 2025-09-19 11:36:50.981663 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2025-09-19 11:36:50.981674 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2025-09-19 11:36:50.981681 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.45s 2025-09-19 11:36:50.981687 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2025-09-19 11:36:50.981694 | orchestrator | 2025-09-19 11:36:50 | INFO  | Task 397b181c-ee4d-42bc-af93-0ded64c7c89d is in state STARTED 2025-09-19 11:36:50.982981 | orchestrator | 2025-09-19 11:36:50 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED 2025-09-19 11:36:50.983078 | orchestrator | 2025-09-19 11:36:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:36:54.031020 | orchestrator | 2025-09-19 11:36:54 | INFO  | Task 397b181c-ee4d-42bc-af93-0ded64c7c89d is in state STARTED 2025-09-19 11:36:54.033040 | orchestrator | 2025-09-19 11:36:54 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED 2025-09-19 11:36:54.033086 | orchestrator | 2025-09-19 11:36:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:36:57.077108 | orchestrator | 2025-09-19 11:36:57 | INFO  | Task 397b181c-ee4d-42bc-af93-0ded64c7c89d is in state STARTED 2025-09-19 11:36:57.079096 | orchestrator | 2025-09-19 11:36:57 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED 2025-09-19 11:36:57.079577 | orchestrator | 2025-09-19 11:36:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:37:00.123438 | orchestrator | 2025-09-19 11:37:00 | INFO  | Task 397b181c-ee4d-42bc-af93-0ded64c7c89d is in state STARTED 2025-09-19 11:37:00.125760 | orchestrator | 2025-09-19 11:37:00 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED 
2025-09-19 11:37:00.126124 | orchestrator | 2025-09-19 11:37:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:37:03.175851 | orchestrator | 2025-09-19 11:37:03 | INFO  | Task 397b181c-ee4d-42bc-af93-0ded64c7c89d is in state STARTED 2025-09-19 11:37:03.178385 | orchestrator | 2025-09-19 11:37:03 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED 2025-09-19 11:37:03.178603 | orchestrator | 2025-09-19 11:37:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:37:06.227488 | orchestrator | 2025-09-19 11:37:06 | INFO  | Task 397b181c-ee4d-42bc-af93-0ded64c7c89d is in state STARTED 2025-09-19 11:37:06.231353 | orchestrator | 2025-09-19 11:37:06 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED 2025-09-19 11:37:06.231387 | orchestrator | 2025-09-19 11:37:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:37:09.272099 | orchestrator | 2025-09-19 11:37:09 | INFO  | Task 397b181c-ee4d-42bc-af93-0ded64c7c89d is in state STARTED 2025-09-19 11:37:09.274863 | orchestrator | 2025-09-19 11:37:09 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED 2025-09-19 11:37:09.274914 | orchestrator | 2025-09-19 11:37:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:37:12.322559 | orchestrator | 2025-09-19 11:37:12 | INFO  | Task 397b181c-ee4d-42bc-af93-0ded64c7c89d is in state STARTED 2025-09-19 11:37:12.324368 | orchestrator | 2025-09-19 11:37:12 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state STARTED 2025-09-19 11:37:12.324408 | orchestrator | 2025-09-19 11:37:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:37:15.369652 | orchestrator | 2025-09-19 11:37:15 | INFO  | Task 397b181c-ee4d-42bc-af93-0ded64c7c89d is in state STARTED 2025-09-19 11:37:15.375765 | orchestrator | 2025-09-19 11:37:15 | INFO  | Task 1a2e602c-f94f-4786-a59a-43a58368efc7 is in state SUCCESS 2025-09-19 11:37:15.376683 | orchestrator | 2025-09-19 11:37:15.376708 | 
orchestrator | 2025-09-19 11:37:15.376713 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-19 11:37:15.376737 | orchestrator | 2025-09-19 11:37:15.376741 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-19 11:37:15.376745 | orchestrator | Friday 19 September 2025 11:33:59 +0000 (0:00:00.114) 0:00:00.114 ****** 2025-09-19 11:37:15.376749 | orchestrator | ok: [localhost] => { 2025-09-19 11:37:15.376755 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-19 11:37:15.376760 | orchestrator | } 2025-09-19 11:37:15.376789 | orchestrator | 2025-09-19 11:37:15.376794 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-19 11:37:15.376799 | orchestrator | Friday 19 September 2025 11:33:59 +0000 (0:00:00.060) 0:00:00.174 ****** 2025-09-19 11:37:15.376804 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-19 11:37:15.376809 | orchestrator | ...ignoring 2025-09-19 11:37:15.376814 | orchestrator | 2025-09-19 11:37:15.376818 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-19 11:37:15.376822 | orchestrator | Friday 19 September 2025 11:34:02 +0000 (0:00:02.865) 0:00:03.040 ****** 2025-09-19 11:37:15.376825 | orchestrator | skipping: [localhost] 2025-09-19 11:37:15.376829 | orchestrator | 2025-09-19 11:37:15.376833 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-19 11:37:15.376837 | orchestrator | Friday 19 September 2025 11:34:02 +0000 (0:00:00.058) 0:00:03.099 ****** 2025-09-19 11:37:15.376841 | orchestrator | ok: [localhost] 2025-09-19 11:37:15.376844 | orchestrator | 2025-09-19 11:37:15.376848 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:37:15.376852 | orchestrator | 2025-09-19 11:37:15.376856 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:37:15.376859 | orchestrator | Friday 19 September 2025 11:34:02 +0000 (0:00:00.158) 0:00:03.258 ****** 2025-09-19 11:37:15.376863 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:37:15.376867 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:37:15.376871 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:37:15.376874 | orchestrator | 2025-09-19 11:37:15.376878 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:37:15.376882 | orchestrator | Friday 19 September 2025 11:34:02 +0000 (0:00:00.493) 0:00:03.751 ****** 2025-09-19 11:37:15.376999 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-19 11:37:15.377007 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-09-19 11:37:15.377011 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-19 11:37:15.377014 | orchestrator | 2025-09-19 11:37:15.377018 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-19 11:37:15.377022 | orchestrator | 2025-09-19 11:37:15.377026 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-19 11:37:15.377029 | orchestrator | Friday 19 September 2025 11:34:03 +0000 (0:00:00.993) 0:00:04.745 ****** 2025-09-19 11:37:15.377033 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 11:37:15.377037 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-19 11:37:15.377040 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-19 11:37:15.377044 | orchestrator | 2025-09-19 11:37:15.377048 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 11:37:15.377052 | orchestrator | Friday 19 September 2025 11:34:04 +0000 (0:00:00.395) 0:00:05.140 ****** 2025-09-19 11:37:15.377066 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:37:15.377071 | orchestrator | 2025-09-19 11:37:15.377074 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-19 11:37:15.377078 | orchestrator | Friday 19 September 2025 11:34:04 +0000 (0:00:00.547) 0:00:05.688 ****** 2025-09-19 11:37:15.377094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:37:15.377106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:37:15.377114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:37:15.377122 | orchestrator | 2025-09-19 11:37:15.377130 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-19 11:37:15.377134 | orchestrator | Friday 19 September 2025 11:34:08 +0000 (0:00:03.347) 0:00:09.036 ****** 2025-09-19 11:37:15.377138 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:37:15.377142 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.377145 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.377149 | orchestrator | 2025-09-19 11:37:15.377153 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-19 11:37:15.377156 | orchestrator | Friday 19 September 2025 11:34:09 +0000 (0:00:01.048) 0:00:10.084 ****** 2025-09-19 11:37:15.377160 | orchestrator | 
skipping: [testbed-node-1] 2025-09-19 11:37:15.377164 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.377168 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:37:15.377171 | orchestrator | 2025-09-19 11:37:15.377175 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-19 11:37:15.377179 | orchestrator | Friday 19 September 2025 11:34:10 +0000 (0:00:01.579) 0:00:11.664 ****** 2025-09-19 11:37:15.377185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:37:15.377197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:37:15.377202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:37:15.377210 | orchestrator | 2025-09-19 11:37:15.377213 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-19 11:37:15.377217 | orchestrator | Friday 19 September 2025 11:34:14 +0000 (0:00:04.261) 0:00:15.926 ****** 2025-09-19 11:37:15.377221 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.377225 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.377228 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:37:15.377232 | orchestrator | 2025-09-19 11:37:15.377238 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-19 11:37:15.377242 | orchestrator | Friday 19 September 2025 11:34:16 +0000 (0:00:01.317) 0:00:17.243 ****** 2025-09-19 11:37:15.377256 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:37:15.377260 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:37:15.377264 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:37:15.377267 | orchestrator | 2025-09-19 11:37:15.377271 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 11:37:15.377275 | orchestrator | Friday 19 September 2025 11:34:20 +0000 (0:00:04.559) 0:00:21.803 ****** 2025-09-19 11:37:15.377278 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:37:15.377282 | orchestrator | 2025-09-19 11:37:15.377286 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-19 11:37:15.377290 | orchestrator | Friday 19 September 2025 11:34:21 +0000 (0:00:00.544) 0:00:22.347 ****** 2025-09-19 11:37:15.377298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:37:15.377303 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.377309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:37:15.377319 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:37:15.377328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:37:15.377333 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.377336 | orchestrator | 2025-09-19 11:37:15.377340 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-19 11:37:15.377344 | orchestrator | Friday 19 September 2025 
11:34:25 +0000 (0:00:04.496) 0:00:26.843 ****** 2025-09-19 11:37:15.377350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:37:15.377358 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
11:37:15.377366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:37:15.377370 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.377374 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:37:15.377382 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.377386 | orchestrator | 2025-09-19 11:37:15.377390 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2025-09-19 11:37:15.377394 | orchestrator | Friday 19 September 2025 11:34:28 +0000 (0:00:02.896) 0:00:29.739 ****** 2025-09-19 11:37:15.377406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2025-09-19 11:37:15.377410 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.377414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:37:15.377422 
| orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.377428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 11:37:15.377432 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
11:37:15.377436 | orchestrator | 2025-09-19 11:37:15.377440 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-19 11:37:15.377444 | orchestrator | Friday 19 September 2025 11:34:31 +0000 (0:00:02.979) 0:00:32.719 ****** 2025-09-19 11:37:15.377450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:37:15.377461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2025-09-19 11:37:15.377469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 11:37:15.377477 | orchestrator | 2025-09-19 11:37:15.377481 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2025-09-19 11:37:15.377484 | orchestrator | Friday 19 September 2025 11:34:35 +0000 (0:00:04.055) 0:00:36.774 ****** 2025-09-19 11:37:15.377488 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:37:15.377492 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:37:15.377496 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:37:15.377499 | orchestrator | 2025-09-19 11:37:15.377503 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-19 11:37:15.377549 | orchestrator | Friday 19 September 2025 11:34:37 +0000 (0:00:01.266) 0:00:38.040 ****** 2025-09-19 11:37:15.377553 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:37:15.377557 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:37:15.377560 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:37:15.377564 | orchestrator | 2025-09-19 11:37:15.377568 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-19 11:37:15.377571 | orchestrator | Friday 19 September 2025 11:34:37 +0000 (0:00:00.335) 0:00:38.376 ****** 2025-09-19 11:37:15.377575 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:37:15.377579 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:37:15.377583 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:37:15.377586 | orchestrator | 2025-09-19 11:37:15.377590 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-19 11:37:15.377594 | orchestrator | Friday 19 September 2025 11:34:37 +0000 (0:00:00.482) 0:00:38.859 ****** 2025-09-19 11:37:15.377602 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-19 11:37:15.377606 | orchestrator | ...ignoring 2025-09-19 11:37:15.377610 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-19 11:37:15.377614 | orchestrator | ...ignoring 2025-09-19 11:37:15.377618 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-19 11:37:15.377621 | orchestrator | ...ignoring 2025-09-19 11:37:15.377625 | orchestrator | 2025-09-19 11:37:15.377629 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-19 11:37:15.377634 | orchestrator | Friday 19 September 2025 11:34:49 +0000 (0:00:11.288) 0:00:50.148 ****** 2025-09-19 11:37:15.377638 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:37:15.377642 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:37:15.377646 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:37:15.377650 | orchestrator | 2025-09-19 11:37:15.377654 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-19 11:37:15.377659 | orchestrator | Friday 19 September 2025 11:34:50 +0000 (0:00:00.913) 0:00:51.061 ****** 2025-09-19 11:37:15.377663 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:37:15.377667 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.377671 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.377675 | orchestrator | 2025-09-19 11:37:15.377679 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-19 11:37:15.377684 | orchestrator | Friday 19 September 2025 11:34:50 +0000 (0:00:00.458) 0:00:51.520 ****** 2025-09-19 11:37:15.377691 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:37:15.377695 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.377700 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.377704 | orchestrator | 2025-09-19 11:37:15.377708 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-19 11:37:15.377712 | orchestrator | Friday 19 September 2025 11:34:51 +0000 (0:00:00.472) 0:00:51.993 ****** 2025-09-19 11:37:15.377716 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:37:15.377720 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.377725 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.377729 | orchestrator | 2025-09-19 11:37:15.377733 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-19 11:37:15.377740 | orchestrator | Friday 19 September 2025 11:34:51 +0000 (0:00:00.473) 0:00:52.467 ****** 2025-09-19 11:37:15.377744 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:37:15.377748 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:37:15.377753 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:37:15.377756 | orchestrator | 2025-09-19 11:37:15.377761 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-19 11:37:15.377765 | orchestrator | Friday 19 September 2025 11:34:52 +0000 (0:00:00.683) 0:00:53.150 ****** 2025-09-19 11:37:15.377769 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:37:15.377773 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.377778 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.377782 | orchestrator | 2025-09-19 11:37:15.377786 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 11:37:15.377790 | orchestrator | Friday 19 September 2025 11:34:52 +0000 (0:00:00.448) 0:00:53.598 ****** 2025-09-19 11:37:15.377794 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.377798 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.377802 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-19 11:37:15.377807 | orchestrator | 2025-09-19 
11:37:15.377811 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-19 11:37:15.377815 | orchestrator | Friday 19 September 2025 11:34:53 +0000 (0:00:00.401) 0:00:53.999 ****** 2025-09-19 11:37:15.377819 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:37:15.377823 | orchestrator | 2025-09-19 11:37:15.377827 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-19 11:37:15.377831 | orchestrator | Friday 19 September 2025 11:35:03 +0000 (0:00:10.537) 0:01:04.537 ****** 2025-09-19 11:37:15.377835 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:37:15.377839 | orchestrator | 2025-09-19 11:37:15.377844 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 11:37:15.377847 | orchestrator | Friday 19 September 2025 11:35:03 +0000 (0:00:00.130) 0:01:04.668 ****** 2025-09-19 11:37:15.377851 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:37:15.377856 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.377860 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.377864 | orchestrator | 2025-09-19 11:37:15.377868 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-19 11:37:15.377872 | orchestrator | Friday 19 September 2025 11:35:04 +0000 (0:00:01.042) 0:01:05.710 ****** 2025-09-19 11:37:15.377876 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:37:15.377880 | orchestrator | 2025-09-19 11:37:15.377884 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-19 11:37:15.377888 | orchestrator | Friday 19 September 2025 11:35:12 +0000 (0:00:07.457) 0:01:13.169 ****** 2025-09-19 11:37:15.377893 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:37:15.377897 | orchestrator | 2025-09-19 11:37:15.377901 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2025-09-19 11:37:15.377905 | orchestrator | Friday 19 September 2025 11:35:13 +0000 (0:00:01.623) 0:01:14.792 ****** 2025-09-19 11:37:15.377909 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:37:15.377917 | orchestrator | 2025-09-19 11:37:15.377921 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-19 11:37:15.377925 | orchestrator | Friday 19 September 2025 11:35:16 +0000 (0:00:02.668) 0:01:17.461 ****** 2025-09-19 11:37:15.377929 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:37:15.377934 | orchestrator | 2025-09-19 11:37:15.377938 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-19 11:37:15.377942 | orchestrator | Friday 19 September 2025 11:35:16 +0000 (0:00:00.131) 0:01:17.592 ****** 2025-09-19 11:37:15.377946 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:37:15.377953 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.377958 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.377962 | orchestrator | 2025-09-19 11:37:15.377966 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-19 11:37:15.377970 | orchestrator | Friday 19 September 2025 11:35:17 +0000 (0:00:00.529) 0:01:18.122 ****** 2025-09-19 11:37:15.377975 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:37:15.377979 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-19 11:37:15.377983 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:37:15.377987 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:37:15.377991 | orchestrator | 2025-09-19 11:37:15.377995 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-19 11:37:15.378000 | orchestrator | skipping: no hosts matched 2025-09-19 11:37:15.378004 | orchestrator | 2025-09-19 11:37:15.378008 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 11:37:15.378012 | orchestrator | 2025-09-19 11:37:15.378040 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 11:37:15.378044 | orchestrator | Friday 19 September 2025 11:35:17 +0000 (0:00:00.379) 0:01:18.501 ****** 2025-09-19 11:37:15.378048 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:37:15.378052 | orchestrator | 2025-09-19 11:37:15.378055 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 11:37:15.378059 | orchestrator | Friday 19 September 2025 11:35:41 +0000 (0:00:24.276) 0:01:42.778 ****** 2025-09-19 11:37:15.378063 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:37:15.378067 | orchestrator | 2025-09-19 11:37:15.378070 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 11:37:15.378074 | orchestrator | Friday 19 September 2025 11:35:57 +0000 (0:00:15.639) 0:01:58.418 ****** 2025-09-19 11:37:15.378078 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:37:15.378081 | orchestrator | 2025-09-19 11:37:15.378085 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 11:37:15.378089 | orchestrator | 2025-09-19 11:37:15.378093 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 11:37:15.378096 | orchestrator | Friday 19 September 2025 11:36:00 +0000 (0:00:02.570) 0:02:00.989 ****** 2025-09-19 11:37:15.378100 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:37:15.378104 | orchestrator | 2025-09-19 11:37:15.378108 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 11:37:15.378117 | orchestrator | Friday 19 September 2025 11:36:24 +0000 (0:00:24.820) 0:02:25.809 ****** 2025-09-19 11:37:15.378123 | 
orchestrator | ok: [testbed-node-2] 2025-09-19 11:37:15.378128 | orchestrator | 2025-09-19 11:37:15.378137 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 11:37:15.378144 | orchestrator | Friday 19 September 2025 11:36:41 +0000 (0:00:16.559) 0:02:42.369 ****** 2025-09-19 11:37:15.378150 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:37:15.378155 | orchestrator | 2025-09-19 11:37:15.378162 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-19 11:37:15.378168 | orchestrator | 2025-09-19 11:37:15.378174 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 11:37:15.378179 | orchestrator | Friday 19 September 2025 11:36:44 +0000 (0:00:02.669) 0:02:45.038 ****** 2025-09-19 11:37:15.378190 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:37:15.378196 | orchestrator | 2025-09-19 11:37:15.378202 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 11:37:15.378207 | orchestrator | Friday 19 September 2025 11:36:54 +0000 (0:00:10.822) 0:02:55.860 ****** 2025-09-19 11:37:15.378212 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:37:15.378217 | orchestrator | 2025-09-19 11:37:15.378223 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 11:37:15.378228 | orchestrator | Friday 19 September 2025 11:36:59 +0000 (0:00:04.660) 0:03:00.521 ****** 2025-09-19 11:37:15.378234 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:37:15.378239 | orchestrator | 2025-09-19 11:37:15.378244 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-19 11:37:15.378250 | orchestrator | 2025-09-19 11:37:15.378255 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-19 11:37:15.378260 | orchestrator | 
Friday 19 September 2025 11:37:02 +0000 (0:00:02.501) 0:03:03.023 ****** 2025-09-19 11:37:15.378265 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:37:15.378270 | orchestrator | 2025-09-19 11:37:15.378275 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-19 11:37:15.378281 | orchestrator | Friday 19 September 2025 11:37:02 +0000 (0:00:00.501) 0:03:03.525 ****** 2025-09-19 11:37:15.378286 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.378292 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.378298 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:37:15.378304 | orchestrator | 2025-09-19 11:37:15.378310 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-19 11:37:15.378315 | orchestrator | Friday 19 September 2025 11:37:05 +0000 (0:00:02.572) 0:03:06.097 ****** 2025-09-19 11:37:15.378321 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.378327 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.378332 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:37:15.378338 | orchestrator | 2025-09-19 11:37:15.378344 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-19 11:37:15.378350 | orchestrator | Friday 19 September 2025 11:37:07 +0000 (0:00:02.194) 0:03:08.292 ****** 2025-09-19 11:37:15.378355 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.378361 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.378367 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:37:15.378372 | orchestrator | 2025-09-19 11:37:15.378378 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-19 11:37:15.378384 | orchestrator | Friday 19 September 2025 11:37:09 +0000 (0:00:02.146) 0:03:10.439 ****** 2025-09-19 11:37:15.378390 | 
orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.378396 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.378402 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:37:15.378408 | orchestrator | 2025-09-19 11:37:15.378414 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-19 11:37:15.378422 | orchestrator | Friday 19 September 2025 11:37:11 +0000 (0:00:02.194) 0:03:12.633 ****** 2025-09-19 11:37:15.378425 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:37:15.378429 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:37:15.378433 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:37:15.378436 | orchestrator | 2025-09-19 11:37:15.378440 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-19 11:37:15.378444 | orchestrator | Friday 19 September 2025 11:37:14 +0000 (0:00:02.954) 0:03:15.587 ****** 2025-09-19 11:37:15.378448 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:37:15.378451 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:37:15.378455 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:37:15.378458 | orchestrator | 2025-09-19 11:37:15.378462 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:37:15.378466 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-19 11:37:15.378475 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-19 11:37:15.378480 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-19 11:37:15.378484 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-19 11:37:15.378487 | orchestrator | 2025-09-19 11:37:15.378491 | orchestrator | 2025-09-19 11:37:15.378495 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-19 11:37:15.378498 | orchestrator | Friday 19 September 2025 11:37:14 +0000 (0:00:00.232) 0:03:15.819 ****** 2025-09-19 11:37:15.378502 | orchestrator | =============================================================================== 2025-09-19 11:37:15.378519 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 49.10s 2025-09-19 11:37:15.378523 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.20s 2025-09-19 11:37:15.378532 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.29s 2025-09-19 11:37:15.378536 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.82s 2025-09-19 11:37:15.378540 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.54s 2025-09-19 11:37:15.378543 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.46s 2025-09-19 11:37:15.378547 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.24s 2025-09-19 11:37:15.378550 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.66s 2025-09-19 11:37:15.378554 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.56s 2025-09-19 11:37:15.378558 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.50s 2025-09-19 11:37:15.378561 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.26s 2025-09-19 11:37:15.378565 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.06s 2025-09-19 11:37:15.378569 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.35s 2025-09-19 11:37:15.378572 | orchestrator | service-cert-copy : 
mariadb | Copying over backend internal TLS key ----- 2.98s 2025-09-19 11:37:15.378576 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.95s 2025-09-19 11:37:15.378580 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.90s 2025-09-19 11:37:15.378583 | orchestrator | Check MariaDB service --------------------------------------------------- 2.87s 2025-09-19 11:37:15.378587 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.67s 2025-09-19 11:37:15.378591 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.57s 2025-09-19 11:37:15.378594 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.50s 2025-09-19 11:37:15.378598 | orchestrator | 2025-09-19 11:37:15 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:37:18.427770 | orchestrator | 2025-09-19 11:37:18 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:37:18.430663 | orchestrator | 2025-09-19 11:37:18 | INFO  | Task 485803b0-ba9a-4bdf-98ce-00d981a71cc1 is in state STARTED 2025-09-19 11:37:18.432962 | orchestrator | 2025-09-19 11:37:18 | INFO  | Task 397b181c-ee4d-42bc-af93-0ded64c7c89d is in state STARTED 2025-09-19 11:37:18.433147 | orchestrator | 2025-09-19 11:37:18 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:37:57.987530 | orchestrator | 2025-09-19 11:37:57 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:37:57.990643 | orchestrator | 2025-09-19 11:37:57 | INFO  | Task 730241fa-0b39-4581-92b2-1d95fb52e67b is in state STARTED 2025-09-19 11:37:57.994531 | orchestrator | 2025-09-19 11:37:57 | INFO  | Task 485803b0-ba9a-4bdf-98ce-00d981a71cc1 is in state STARTED 2025-09-19 11:37:57.997689 | orchestrator | 2025-09-19 11:37:57 | INFO  | Task 397b181c-ee4d-42bc-af93-0ded64c7c89d is in state SUCCESS 2025-09-19 11:37:57.999725 | orchestrator | 2025-09-19 11:37:57.999782 | orchestrator | 2025-09-19 11:37:57.999797 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-19 11:37:57.999809 | orchestrator | 2025-09-19 11:37:57.999820 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-19 11:37:57.999832 | orchestrator | Friday 19 September 2025 11:35:46 +0000 (0:00:00.594) 0:00:00.594 ****** 2025-09-19 11:37:57.999848 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:37:57.999868 | orchestrator | 2025-09-19 11:37:57.999885 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-19 11:37:57.999902 | orchestrator | Friday 19 September 2025 11:35:47 +0000 (0:00:00.641) 0:00:01.235 ****** 2025-09-19 11:37:57.999921 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:57.999974 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:57.999995 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.000014 | orchestrator | 2025-09-19 11:37:58.000033 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-19 11:37:58.000051 | orchestrator | Friday 19 September 2025 11:35:48 +0000 (0:00:00.279) 0:00:01.924 ****** 2025-09-19 11:37:58.000069 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.000086 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:58.000103 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:58.000122 | orchestrator | 2025-09-19 11:37:58.000143 | orchestrator | TASK [ceph-facts : 
Check if podman binary is present] ************************** 2025-09-19 11:37:58.000846 | orchestrator | Friday 19 September 2025 11:35:48 +0000 (0:00:00.279) 0:00:02.204 ****** 2025-09-19 11:37:58.000896 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.000917 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:58.000934 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:58.001223 | orchestrator | 2025-09-19 11:37:58.001246 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-19 11:37:58.001257 | orchestrator | Friday 19 September 2025 11:35:49 +0000 (0:00:00.776) 0:00:02.980 ****** 2025-09-19 11:37:58.001268 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.001279 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:58.001289 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:58.001300 | orchestrator | 2025-09-19 11:37:58.001311 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-19 11:37:58.001321 | orchestrator | Friday 19 September 2025 11:35:49 +0000 (0:00:00.319) 0:00:03.299 ****** 2025-09-19 11:37:58.001332 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.001343 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:58.001353 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:58.001364 | orchestrator | 2025-09-19 11:37:58.001374 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-19 11:37:58.001385 | orchestrator | Friday 19 September 2025 11:35:49 +0000 (0:00:00.314) 0:00:03.614 ****** 2025-09-19 11:37:58.001395 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.001406 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:58.001417 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:58.001427 | orchestrator | 2025-09-19 11:37:58.001438 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 
2025-09-19 11:37:58.001480 | orchestrator | Friday 19 September 2025 11:35:50 +0000 (0:00:00.351) 0:00:03.966 ****** 2025-09-19 11:37:58.001492 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.001509 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.001529 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.001546 | orchestrator | 2025-09-19 11:37:58.001565 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-19 11:37:58.001584 | orchestrator | Friday 19 September 2025 11:35:50 +0000 (0:00:00.512) 0:00:04.478 ****** 2025-09-19 11:37:58.001601 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.001620 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:58.001638 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:58.001656 | orchestrator | 2025-09-19 11:37:58.001675 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-19 11:37:58.001693 | orchestrator | Friday 19 September 2025 11:35:51 +0000 (0:00:00.292) 0:00:04.771 ****** 2025-09-19 11:37:58.001714 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 11:37:58.001758 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 11:37:58.001780 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 11:37:58.001799 | orchestrator | 2025-09-19 11:37:58.001820 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-19 11:37:58.001840 | orchestrator | Friday 19 September 2025 11:35:51 +0000 (0:00:00.631) 0:00:05.402 ****** 2025-09-19 11:37:58.001859 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.001900 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:58.001922 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:58.001943 | orchestrator | 
2025-09-19 11:37:58.001963 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-19 11:37:58.001985 | orchestrator | Friday 19 September 2025 11:35:52 +0000 (0:00:00.446) 0:00:05.849 ****** 2025-09-19 11:37:58.002005 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 11:37:58.002095 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 11:37:58.002110 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 11:37:58.002122 | orchestrator | 2025-09-19 11:37:58.002134 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-19 11:37:58.002147 | orchestrator | Friday 19 September 2025 11:35:54 +0000 (0:00:02.244) 0:00:08.095 ****** 2025-09-19 11:37:58.002160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-19 11:37:58.002172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-19 11:37:58.002185 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-19 11:37:58.002197 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.002207 | orchestrator | 2025-09-19 11:37:58.002218 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-19 11:37:58.002369 | orchestrator | Friday 19 September 2025 11:35:54 +0000 (0:00:00.399) 0:00:08.494 ****** 2025-09-19 11:37:58.002389 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.002404 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.002415 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.002426 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.002436 | orchestrator | 2025-09-19 11:37:58.002447 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-19 11:37:58.002483 | orchestrator | Friday 19 September 2025 11:35:55 +0000 (0:00:00.807) 0:00:09.302 ****** 2025-09-19 11:37:58.002496 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.002510 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.002521 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.002545 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.002556 | orchestrator | 2025-09-19 11:37:58.002567 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-19 11:37:58.002577 | orchestrator | Friday 19 September 2025 11:35:55 +0000 (0:00:00.152) 0:00:09.454 ****** 2025-09-19 11:37:58.002599 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd78e8231ce7e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-19 11:35:52.829897', 'end': '2025-09-19 11:35:52.869351', 'delta': '0:00:00.039454', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d78e8231ce7e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-19 11:37:58.002614 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cd76542fbd87', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-19 11:35:53.614740', 'end': '2025-09-19 11:35:53.658710', 'delta': '0:00:00.043970', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cd76542fbd87'], 'stderr_lines': [], 'failed': False, 'failed_when_result': 
False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-19 11:37:58.002659 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6130b7b77df6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-19 11:35:54.164167', 'end': '2025-09-19 11:35:54.207216', 'delta': '0:00:00.043049', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6130b7b77df6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-19 11:37:58.002672 | orchestrator | 2025-09-19 11:37:58.002683 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-19 11:37:58.002694 | orchestrator | Friday 19 September 2025 11:35:56 +0000 (0:00:00.395) 0:00:09.849 ****** 2025-09-19 11:37:58.002704 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.002715 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:58.002726 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:58.002736 | orchestrator | 2025-09-19 11:37:58.002747 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-19 11:37:58.002758 | orchestrator | Friday 19 September 2025 11:35:56 +0000 (0:00:00.456) 0:00:10.306 ****** 2025-09-19 11:37:58.002797 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-19 11:37:58.002808 | orchestrator | 2025-09-19 11:37:58.002819 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-19 11:37:58.002830 | orchestrator | Friday 19 September 2025 11:35:58 +0000 (0:00:01.752) 0:00:12.058 
****** 2025-09-19 11:37:58.002840 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.002851 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.002862 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.002872 | orchestrator | 2025-09-19 11:37:58.002883 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-19 11:37:58.002894 | orchestrator | Friday 19 September 2025 11:35:58 +0000 (0:00:00.305) 0:00:12.364 ****** 2025-09-19 11:37:58.002914 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.002925 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.002936 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.002947 | orchestrator | 2025-09-19 11:37:58.002960 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-19 11:37:58.002972 | orchestrator | Friday 19 September 2025 11:35:59 +0000 (0:00:00.443) 0:00:12.807 ****** 2025-09-19 11:37:58.002985 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.002997 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.003010 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.003023 | orchestrator | 2025-09-19 11:37:58.003035 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-19 11:37:58.003048 | orchestrator | Friday 19 September 2025 11:35:59 +0000 (0:00:00.529) 0:00:13.336 ****** 2025-09-19 11:37:58.003061 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.003073 | orchestrator | 2025-09-19 11:37:58.003085 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-09-19 11:37:58.003098 | orchestrator | Friday 19 September 2025 11:35:59 +0000 (0:00:00.132) 0:00:13.469 ****** 2025-09-19 11:37:58.003110 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.003122 | orchestrator | 2025-09-19 11:37:58.003134 | 
orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-19 11:37:58.003146 | orchestrator | Friday 19 September 2025 11:35:59 +0000 (0:00:00.217) 0:00:13.687 ****** 2025-09-19 11:37:58.003158 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.003171 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.003183 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.003196 | orchestrator | 2025-09-19 11:37:58.003208 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-19 11:37:58.003226 | orchestrator | Friday 19 September 2025 11:36:00 +0000 (0:00:00.284) 0:00:13.971 ****** 2025-09-19 11:37:58.003239 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.003251 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.003264 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.003276 | orchestrator | 2025-09-19 11:37:58.003289 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-19 11:37:58.003302 | orchestrator | Friday 19 September 2025 11:36:00 +0000 (0:00:00.321) 0:00:14.292 ****** 2025-09-19 11:37:58.003314 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.003325 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.003335 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.003346 | orchestrator | 2025-09-19 11:37:58.003356 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-19 11:37:58.003367 | orchestrator | Friday 19 September 2025 11:36:01 +0000 (0:00:00.512) 0:00:14.805 ****** 2025-09-19 11:37:58.003378 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.003388 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.003399 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.003410 | orchestrator | 2025-09-19 11:37:58.003420 | 
orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-19 11:37:58.003431 | orchestrator | Friday 19 September 2025 11:36:01 +0000 (0:00:00.344) 0:00:15.149 ****** 2025-09-19 11:37:58.003441 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.003477 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.003488 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.003499 | orchestrator | 2025-09-19 11:37:58.003510 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-19 11:37:58.003520 | orchestrator | Friday 19 September 2025 11:36:01 +0000 (0:00:00.347) 0:00:15.497 ****** 2025-09-19 11:37:58.003531 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.003541 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.003552 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.003563 | orchestrator | 2025-09-19 11:37:58.003574 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-19 11:37:58.003623 | orchestrator | Friday 19 September 2025 11:36:02 +0000 (0:00:00.345) 0:00:15.842 ****** 2025-09-19 11:37:58.003636 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.003647 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.003657 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.003668 | orchestrator | 2025-09-19 11:37:58.003678 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-19 11:37:58.003689 | orchestrator | Friday 19 September 2025 11:36:02 +0000 (0:00:00.517) 0:00:16.359 ****** 2025-09-19 11:37:58.003702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f2e5a9ae--16db--5885--a5f1--5293896cd0a9-osd--block--f2e5a9ae--16db--5885--a5f1--5293896cd0a9', 
'dm-uuid-LVM-T0qtfsVXAM2pxgkSZHPOh8wOanAOcnyXtrQDNWKQpMdeLKVaBer12Y5MriBAgVYI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.003714 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5-osd--block--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5', 'dm-uuid-LVM-u2rmXfbzi0TuTIdRJEkihfDRShJacu7nwni3ibQB2pd4SpbFkYjAfzf4Sfdt0x2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.003726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.003737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.003753 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.003765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.003776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.003818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.003831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.003842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.003861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part15', 
'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.003877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f2e5a9ae--16db--5885--a5f1--5293896cd0a9-osd--block--f2e5a9ae--16db--5885--a5f1--5293896cd0a9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BdEUeW-T1x2-3zEI-sGKj-LbaC-JGTN-0d2P5Z', 'scsi-0QEMU_QEMU_HARDDISK_729b54dd-f4c1-4a98-9e39-7aa2dbdf058c', 'scsi-SQEMU_QEMU_HARDDISK_729b54dd-f4c1-4a98-9e39-7aa2dbdf058c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.003922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5-osd--block--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OD12ed-tnfe-q2vC-3MJo-XQuM-2lVY-yBnEkJ', 'scsi-0QEMU_QEMU_HARDDISK_ff354216-c1d2-4110-b9e3-f4cf06b21a62', 'scsi-SQEMU_QEMU_HARDDISK_ff354216-c1d2-4110-b9e3-f4cf06b21a62'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.003936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2859ea6e-5cf3-4595-8353-f67711d21d4e', 'scsi-SQEMU_QEMU_HARDDISK_2859ea6e-5cf3-4595-8353-f67711d21d4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.003948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--499bb3ba--5d36--55d4--9ab4--77fea8769c5a-osd--block--499bb3ba--5d36--55d4--9ab4--77fea8769c5a', 'dm-uuid-LVM-sKkYbBtPH7TYB3qfRwMoXcTlubcZTSnUwbyaQ36SqEI2lNR4qCbIkTanXU63GGfj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.003961 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.003977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--482defc3--95b3--50a2--a4e9--5dea1f7a25a6-osd--block--482defc3--95b3--50a2--a4e9--5dea1f7a25a6', 'dm-uuid-LVM-Sl0oI0DJ7k2WfSqpCpDPMQAJ3ZO72PP8zuJsSfJnx1r8Dx3XYQOxuPl2OhsGiW57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.003988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-09-19 11:37:58.004018 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.004053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004089 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.004180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--499bb3ba--5d36--55d4--9ab4--77fea8769c5a-osd--block--499bb3ba--5d36--55d4--9ab4--77fea8769c5a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TejcCD-UdZ2-c8zU-pqzM-8B6r-uMOu-IbZL3W', 'scsi-0QEMU_QEMU_HARDDISK_a7da52da-8ff9-443f-9c01-2997209c642a', 'scsi-SQEMU_QEMU_HARDDISK_a7da52da-8ff9-443f-9c01-2997209c642a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.004193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6-osd--block--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6', 'dm-uuid-LVM-FoG8G6pM9fdL9UmfNP40N67XYHhtV7O75sHctXcNSZ3xMwuxruSzQBMWTX3PJZ3g'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004204 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--482defc3--95b3--50a2--a4e9--5dea1f7a25a6-osd--block--482defc3--95b3--50a2--a4e9--5dea1f7a25a6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Afv5wH-m8oE-CrJP-EkRU-7lo4-wmhy-8decif', 'scsi-0QEMU_QEMU_HARDDISK_2d05b72c-4493-4412-ad25-c0b6cbf3de12', 'scsi-SQEMU_QEMU_HARDDISK_2d05b72c-4493-4412-ad25-c0b6cbf3de12'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.004221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9f018b0b--9dc8--5104--9bc9--2c288294c8fd-osd--block--9f018b0b--9dc8--5104--9bc9--2c288294c8fd', 'dm-uuid-LVM-5r8hi0765R3tOEAFRn6eSUN63tC5cvhQCCRz4D05AtspdLxUkd72JgtEFGklhg06'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c', 'scsi-SQEMU_QEMU_HARDDISK_a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.004250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:37:58.004293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none',
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004304 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.004315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004338 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-09-19 11:37:58.004371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 11:37:58.004405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part1', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part14', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part15', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part16', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.004418 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6-osd--block--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P1hpgE-vTn7-lguI-7OdR-dZzc-V1cJ-ofZPGd', 'scsi-0QEMU_QEMU_HARDDISK_4ab3eba9-7f04-4545-b862-1d19a7d78b14', 'scsi-SQEMU_QEMU_HARDDISK_4ab3eba9-7f04-4545-b862-1d19a7d78b14'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.004434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9f018b0b--9dc8--5104--9bc9--2c288294c8fd-osd--block--9f018b0b--9dc8--5104--9bc9--2c288294c8fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-O1sv3G-jn1l-O4mD-DT5w-gp6W-Oe6c-ak2i7W', 'scsi-0QEMU_QEMU_HARDDISK_82c12b62-ffbd-484b-a107-b043e35ec15c', 'scsi-SQEMU_QEMU_HARDDISK_82c12b62-ffbd-484b-a107-b043e35ec15c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.004474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23c8bdec-2f7a-480a-98d1-592cee3b582b', 'scsi-SQEMU_QEMU_HARDDISK_23c8bdec-2f7a-480a-98d1-592cee3b582b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.004495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 11:37:58.004507 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.004517 | orchestrator | 2025-09-19 11:37:58.004529 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-19 11:37:58.004540 | orchestrator | Friday 19 September 2025 11:36:03 +0000 (0:00:00.581) 0:00:16.941 ****** 2025-09-19 11:37:58.004551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f2e5a9ae--16db--5885--a5f1--5293896cd0a9-osd--block--f2e5a9ae--16db--5885--a5f1--5293896cd0a9', 'dm-uuid-LVM-T0qtfsVXAM2pxgkSZHPOh8wOanAOcnyXtrQDNWKQpMdeLKVaBer12Y5MriBAgVYI'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004563 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5-osd--block--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5', 'dm-uuid-LVM-u2rmXfbzi0TuTIdRJEkihfDRShJacu7nwni3ibQB2pd4SpbFkYjAfzf4Sfdt0x2c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004574 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004597 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004608 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004627 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004639 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004650 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004661 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004682 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part1', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part14', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part15', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part16', 'scsi-SQEMU_QEMU_HARDDISK_ee875cf9-0ab9-455c-b6ff-02f5d369ce10-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004718 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f2e5a9ae--16db--5885--a5f1--5293896cd0a9-osd--block--f2e5a9ae--16db--5885--a5f1--5293896cd0a9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BdEUeW-T1x2-3zEI-sGKj-LbaC-JGTN-0d2P5Z', 'scsi-0QEMU_QEMU_HARDDISK_729b54dd-f4c1-4a98-9e39-7aa2dbdf058c', 'scsi-SQEMU_QEMU_HARDDISK_729b54dd-f4c1-4a98-9e39-7aa2dbdf058c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004744 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--499bb3ba--5d36--55d4--9ab4--77fea8769c5a-osd--block--499bb3ba--5d36--55d4--9ab4--77fea8769c5a', 'dm-uuid-LVM-sKkYbBtPH7TYB3qfRwMoXcTlubcZTSnUwbyaQ36SqEI2lNR4qCbIkTanXU63GGfj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5-osd--block--d15bf0b7--095a--52ef--97a5--c7d3cf055ef5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-OD12ed-tnfe-q2vC-3MJo-XQuM-2lVY-yBnEkJ', 'scsi-0QEMU_QEMU_HARDDISK_ff354216-c1d2-4110-b9e3-f4cf06b21a62', 'scsi-SQEMU_QEMU_HARDDISK_ff354216-c1d2-4110-b9e3-f4cf06b21a62'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004775 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2859ea6e-5cf3-4595-8353-f67711d21d4e', 'scsi-SQEMU_QEMU_HARDDISK_2859ea6e-5cf3-4595-8353-f67711d21d4e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004788 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--482defc3--95b3--50a2--a4e9--5dea1f7a25a6-osd--block--482defc3--95b3--50a2--a4e9--5dea1f7a25a6', 'dm-uuid-LVM-Sl0oI0DJ7k2WfSqpCpDPMQAJ3ZO72PP8zuJsSfJnx1r8Dx3XYQOxuPl2OhsGiW57'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004816 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004835 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004847 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.004858 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004875 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004887 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004898 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004916 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004928 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6-osd--block--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6', 'dm-uuid-LVM-FoG8G6pM9fdL9UmfNP40N67XYHhtV7O75sHctXcNSZ3xMwuxruSzQBMWTX3PJZ3g'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.004957 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9f018b0b--9dc8--5104--9bc9--2c288294c8fd-osd--block--9f018b0b--9dc8--5104--9bc9--2c288294c8fd', 'dm-uuid-LVM-5r8hi0765R3tOEAFRn6eSUN63tC5cvhQCCRz4D05AtspdLxUkd72JgtEFGklhg06'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2025-09-19 11:37:58.005009 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part1', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part14', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part15', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part16', 'scsi-SQEMU_QEMU_HARDDISK_0705e7c4-71e7-4335-94ae-66aba7e7deb2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005035 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005053 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--499bb3ba--5d36--55d4--9ab4--77fea8769c5a-osd--block--499bb3ba--5d36--55d4--9ab4--77fea8769c5a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TejcCD-UdZ2-c8zU-pqzM-8B6r-uMOu-IbZL3W', 'scsi-0QEMU_QEMU_HARDDISK_a7da52da-8ff9-443f-9c01-2997209c642a', 'scsi-SQEMU_QEMU_HARDDISK_a7da52da-8ff9-443f-9c01-2997209c642a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005065 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005077 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--482defc3--95b3--50a2--a4e9--5dea1f7a25a6-osd--block--482defc3--95b3--50a2--a4e9--5dea1f7a25a6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Afv5wH-m8oE-CrJP-EkRU-7lo4-wmhy-8decif', 'scsi-0QEMU_QEMU_HARDDISK_2d05b72c-4493-4412-ad25-c0b6cbf3de12', 'scsi-SQEMU_QEMU_HARDDISK_2d05b72c-4493-4412-ad25-c0b6cbf3de12'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005096 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005112 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c', 'scsi-SQEMU_QEMU_HARDDISK_a6332a85-bdda-4d26-8c8d-9b70f0aa8d7c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005123 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005140 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005152 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005163 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.005174 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005192 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005207 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005227 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part1', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part14', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part15', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part16', 'scsi-SQEMU_QEMU_HARDDISK_482f8994-f50e-4592-b361-7a4b29e22e2d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005240 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6-osd--block--4ec87955--83d4--5f81--a4e3--fa3184f5f6e6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P1hpgE-vTn7-lguI-7OdR-dZzc-V1cJ-ofZPGd', 'scsi-0QEMU_QEMU_HARDDISK_4ab3eba9-7f04-4545-b862-1d19a7d78b14', 'scsi-SQEMU_QEMU_HARDDISK_4ab3eba9-7f04-4545-b862-1d19a7d78b14'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005263 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--9f018b0b--9dc8--5104--9bc9--2c288294c8fd-osd--block--9f018b0b--9dc8--5104--9bc9--2c288294c8fd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-O1sv3G-jn1l-O4mD-DT5w-gp6W-Oe6c-ak2i7W', 'scsi-0QEMU_QEMU_HARDDISK_82c12b62-ffbd-484b-a107-b043e35ec15c', 'scsi-SQEMU_QEMU_HARDDISK_82c12b62-ffbd-484b-a107-b043e35ec15c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005275 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_23c8bdec-2f7a-480a-98d1-592cee3b582b', 'scsi-SQEMU_QEMU_HARDDISK_23c8bdec-2f7a-480a-98d1-592cee3b582b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005293 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-10-45-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 11:37:58.005305 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.005315 | orchestrator | 2025-09-19 11:37:58.005326 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-19 11:37:58.005338 | orchestrator | Friday 19 September 2025 11:36:03 +0000 (0:00:00.623) 0:00:17.565 ****** 2025-09-19 11:37:58.005349 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:58.005359 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.005370 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:58.005381 | orchestrator | 2025-09-19 11:37:58.005398 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2025-09-19 11:37:58.005408 | orchestrator | Friday 19 September 2025 11:36:04 +0000 (0:00:00.668) 0:00:18.234 ****** 2025-09-19 11:37:58.005419 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.005430 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:58.005440 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:58.005482 | orchestrator | 2025-09-19 11:37:58.005494 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 11:37:58.005505 | orchestrator | Friday 19 September 2025 11:36:04 +0000 (0:00:00.507) 0:00:18.741 ****** 2025-09-19 11:37:58.005515 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.005526 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:58.005536 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:58.005547 | orchestrator | 2025-09-19 11:37:58.005558 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-19 11:37:58.005568 | orchestrator | Friday 19 September 2025 11:36:05 +0000 (0:00:00.636) 0:00:19.378 ****** 2025-09-19 11:37:58.005579 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.005590 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.005601 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.005611 | orchestrator | 2025-09-19 11:37:58.005622 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-19 11:37:58.005633 | orchestrator | Friday 19 September 2025 11:36:05 +0000 (0:00:00.274) 0:00:19.652 ****** 2025-09-19 11:37:58.005643 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.005654 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.005665 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.005675 | orchestrator | 2025-09-19 11:37:58.005686 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2025-09-19 11:37:58.005697 | orchestrator | Friday 19 September 2025 11:36:06 +0000 (0:00:00.450) 0:00:20.103 ****** 2025-09-19 11:37:58.005707 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.005718 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.005729 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.005739 | orchestrator | 2025-09-19 11:37:58.005750 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-19 11:37:58.005761 | orchestrator | Friday 19 September 2025 11:36:06 +0000 (0:00:00.498) 0:00:20.602 ****** 2025-09-19 11:37:58.005772 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-19 11:37:58.005783 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-19 11:37:58.005794 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-19 11:37:58.005804 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-19 11:37:58.005815 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-19 11:37:58.005826 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-19 11:37:58.005836 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-19 11:37:58.005847 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-19 11:37:58.005857 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-19 11:37:58.005868 | orchestrator | 2025-09-19 11:37:58.005883 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-19 11:37:58.005894 | orchestrator | Friday 19 September 2025 11:36:07 +0000 (0:00:00.850) 0:00:21.452 ****** 2025-09-19 11:37:58.005905 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-19 11:37:58.005916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-19 11:37:58.005926 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2025-09-19 11:37:58.005937 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.005947 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-19 11:37:58.005958 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-19 11:37:58.005968 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-19 11:37:58.005986 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.005996 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-19 11:37:58.006007 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-19 11:37:58.006074 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-19 11:37:58.006086 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.006097 | orchestrator | 2025-09-19 11:37:58.006108 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-19 11:37:58.006119 | orchestrator | Friday 19 September 2025 11:36:08 +0000 (0:00:00.345) 0:00:21.797 ****** 2025-09-19 11:37:58.006129 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:37:58.006140 | orchestrator | 2025-09-19 11:37:58.006151 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-19 11:37:58.006162 | orchestrator | Friday 19 September 2025 11:36:08 +0000 (0:00:00.733) 0:00:22.531 ****** 2025-09-19 11:37:58.006180 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.006191 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.006202 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.006212 | orchestrator | 2025-09-19 11:37:58.006223 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2025-09-19 11:37:58.006234 | orchestrator | Friday 19 September 2025 11:36:09 +0000 (0:00:00.360) 0:00:22.892 ****** 2025-09-19 11:37:58.006245 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.006255 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.006266 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.006277 | orchestrator | 2025-09-19 11:37:58.006288 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-19 11:37:58.006298 | orchestrator | Friday 19 September 2025 11:36:09 +0000 (0:00:00.302) 0:00:23.194 ****** 2025-09-19 11:37:58.006309 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.006320 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.006330 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:37:58.006341 | orchestrator | 2025-09-19 11:37:58.006351 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-19 11:37:58.006362 | orchestrator | Friday 19 September 2025 11:36:09 +0000 (0:00:00.314) 0:00:23.508 ****** 2025-09-19 11:37:58.006373 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.006384 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:58.006394 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:58.006405 | orchestrator | 2025-09-19 11:37:58.006415 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-19 11:37:58.006426 | orchestrator | Friday 19 September 2025 11:36:10 +0000 (0:00:00.603) 0:00:24.111 ****** 2025-09-19 11:37:58.006437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:37:58.006465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:37:58.006477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:37:58.006488 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.006498 | 
orchestrator | 2025-09-19 11:37:58.006509 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-19 11:37:58.006520 | orchestrator | Friday 19 September 2025 11:36:10 +0000 (0:00:00.368) 0:00:24.480 ****** 2025-09-19 11:37:58.006530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:37:58.006541 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:37:58.006551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:37:58.006562 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.006572 | orchestrator | 2025-09-19 11:37:58.006583 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-19 11:37:58.006593 | orchestrator | Friday 19 September 2025 11:36:11 +0000 (0:00:00.366) 0:00:24.846 ****** 2025-09-19 11:37:58.006613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 11:37:58.006624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 11:37:58.006634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 11:37:58.006645 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.006655 | orchestrator | 2025-09-19 11:37:58.006666 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-19 11:37:58.006677 | orchestrator | Friday 19 September 2025 11:36:11 +0000 (0:00:00.359) 0:00:25.206 ****** 2025-09-19 11:37:58.006687 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:37:58.006698 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:37:58.006709 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:37:58.006719 | orchestrator | 2025-09-19 11:37:58.006730 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-19 11:37:58.006740 | orchestrator | Friday 19 September 2025 11:36:11 
+0000 (0:00:00.308) 0:00:25.514 ****** 2025-09-19 11:37:58.006751 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-19 11:37:58.006762 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-19 11:37:58.006772 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-19 11:37:58.006783 | orchestrator | 2025-09-19 11:37:58.006793 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-19 11:37:58.006812 | orchestrator | Friday 19 September 2025 11:36:12 +0000 (0:00:00.514) 0:00:26.029 ****** 2025-09-19 11:37:58.006823 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 11:37:58.006834 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 11:37:58.006845 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 11:37:58.006855 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-19 11:37:58.006866 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 11:37:58.006877 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 11:37:58.006887 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 11:37:58.006898 | orchestrator | 2025-09-19 11:37:58.006909 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-19 11:37:58.006919 | orchestrator | Friday 19 September 2025 11:36:13 +0000 (0:00:00.999) 0:00:27.029 ****** 2025-09-19 11:37:58.006930 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-19 11:37:58.006941 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-19 11:37:58.006951 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-19 11:37:58.006962 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-19 11:37:58.006972 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-19 11:37:58.006983 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-19 11:37:58.006999 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-19 11:37:58.007010 | orchestrator | 2025-09-19 11:37:58.007021 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-19 11:37:58.007032 | orchestrator | Friday 19 September 2025 11:36:15 +0000 (0:00:02.029) 0:00:29.059 ****** 2025-09-19 11:37:58.007042 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:37:58.007053 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:37:58.007064 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-19 11:37:58.007074 | orchestrator | 2025-09-19 11:37:58.007085 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-19 11:37:58.007096 | orchestrator | Friday 19 September 2025 11:36:15 +0000 (0:00:00.354) 0:00:29.413 ****** 2025-09-19 11:37:58.007115 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 11:37:58.007127 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2025-09-19 11:37:58.007138 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 11:37:58.007149 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 11:37:58.007160 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-19 11:37:58.007171 | orchestrator | 2025-09-19 11:37:58.007182 | orchestrator | TASK [generate keys] *********************************************************** 2025-09-19 11:37:58.007193 | orchestrator | Friday 19 September 2025 11:37:00 +0000 (0:00:44.920) 0:01:14.333 ****** 2025-09-19 11:37:58.007203 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007214 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007224 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007235 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007246 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007256 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 
11:37:58.007272 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-19 11:37:58.007282 | orchestrator | 2025-09-19 11:37:58.007293 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-19 11:37:58.007304 | orchestrator | Friday 19 September 2025 11:37:24 +0000 (0:00:24.329) 0:01:38.662 ****** 2025-09-19 11:37:58.007314 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007325 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007336 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007346 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007357 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007367 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007378 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 11:37:58.007388 | orchestrator | 2025-09-19 11:37:58.007399 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-19 11:37:58.007409 | orchestrator | Friday 19 September 2025 11:37:37 +0000 (0:00:12.088) 0:01:50.751 ****** 2025-09-19 11:37:58.007420 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007430 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 11:37:58.007467 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 11:37:58.007479 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007490 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 11:37:58.007507 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 11:37:58.007518 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007529 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 11:37:58.007540 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 11:37:58.007550 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007561 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 11:37:58.007572 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 11:37:58.007583 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007593 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 11:37:58.007604 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 11:37:58.007614 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 11:37:58.007625 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 11:37:58.007636 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 11:37:58.007647 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-19 11:37:58.007657 | orchestrator | 2025-09-19 11:37:58.007668 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:37:58.007679 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-19 11:37:58.007691 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-19 11:37:58.007702 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-19 11:37:58.007712 | orchestrator | 2025-09-19 11:37:58.007723 | orchestrator | 2025-09-19 11:37:58.007734 | orchestrator | 2025-09-19 11:37:58.007745 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:37:58.007755 | orchestrator | Friday 19 September 2025 11:37:54 +0000 (0:00:17.765) 0:02:08.517 ****** 2025-09-19 11:37:58.007766 | orchestrator | =============================================================================== 2025-09-19 11:37:58.007777 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.92s 2025-09-19 11:37:58.007787 | orchestrator | generate keys ---------------------------------------------------------- 24.33s 2025-09-19 11:37:58.007798 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.77s 2025-09-19 11:37:58.007809 | orchestrator | get keys from monitors ------------------------------------------------- 12.09s 2025-09-19 11:37:58.007820 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.25s 2025-09-19 11:37:58.007830 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.03s 2025-09-19 11:37:58.007841 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.75s 2025-09-19 11:37:58.007851 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.00s 2025-09-19 11:37:58.007862 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s 2025-09-19 11:37:58.007879 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.81s 2025-09-19 
11:37:58.007896 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.78s 2025-09-19 11:37:58.007908 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s 2025-09-19 11:37:58.007918 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.69s 2025-09-19 11:37:58.007929 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.67s 2025-09-19 11:37:58.007939 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.64s 2025-09-19 11:37:58.007950 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2025-09-19 11:37:58.007961 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.63s 2025-09-19 11:37:58.007971 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.62s 2025-09-19 11:37:58.007982 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.60s 2025-09-19 11:37:58.007993 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.58s 2025-09-19 11:38:01.049124 | orchestrator | 2025-09-19 11:38:01 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:38:01.051967 | orchestrator | 2025-09-19 11:38:01 | INFO  | Task 730241fa-0b39-4581-92b2-1d95fb52e67b is in state STARTED 2025-09-19 11:38:01.053112 | orchestrator | 2025-09-19 11:38:01 | INFO  | Task 485803b0-ba9a-4bdf-98ce-00d981a71cc1 is in state STARTED 2025-09-19 11:38:01.053135 | orchestrator | 2025-09-19 11:38:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:38:04.095706 | orchestrator | 2025-09-19 11:38:04 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:38:04.096705 | orchestrator | 2025-09-19 11:38:04 | INFO  | Task 
730241fa-0b39-4581-92b2-1d95fb52e67b is in state STARTED
2025-09-19 11:38:04.098305 | orchestrator | 2025-09-19 11:38:04 | INFO  | Task 485803b0-ba9a-4bdf-98ce-00d981a71cc1 is in state STARTED
2025-09-19 11:38:04.098431 | orchestrator | 2025-09-19 11:38:04 | INFO  | Wait 1 second(s) until the next check
[identical status checks for the same three tasks, repeated every 3 seconds from 11:38:07 through 11:38:25, omitted]
2025-09-19 11:38:28.499119 | orchestrator | 2025-09-19 11:38:28 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED
2025-09-19 11:38:28.502144 | orchestrator | 2025-09-19 11:38:28 | INFO  | Task b53c54eb-72c4-4bbc-80f2-25b87ad8823c is in state STARTED
2025-09-19 11:38:28.502761 | orchestrator | 2025-09-19 11:38:28 | INFO  | Task 730241fa-0b39-4581-92b2-1d95fb52e67b is in state SUCCESS
2025-09-19 11:38:28.504143 | orchestrator | 2025-09-19 11:38:28 | INFO  | Task 485803b0-ba9a-4bdf-98ce-00d981a71cc1 is in state STARTED
2025-09-19 11:38:28.504166 | orchestrator | 2025-09-19 11:38:28 | INFO  | Wait 1 second(s) until the next check
[identical status checks for the three remaining tasks, repeated every 3 seconds from 11:38:31 through 11:38:59, omitted]
2025-09-19 11:39:02.070091 | orchestrator | 2025-09-19 11:39:02 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:02.070868 | orchestrator |
2025-09-19 11:39:02 | INFO  | Task b53c54eb-72c4-4bbc-80f2-25b87ad8823c is in state STARTED 2025-09-19 11:39:02.072329 | orchestrator | 2025-09-19 11:39:02 | INFO  | Task 485803b0-ba9a-4bdf-98ce-00d981a71cc1 is in state STARTED 2025-09-19 11:39:02.072352 | orchestrator | 2025-09-19 11:39:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:05.116843 | orchestrator | 2025-09-19 11:39:05 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:05.118792 | orchestrator | 2025-09-19 11:39:05 | INFO  | Task b53c54eb-72c4-4bbc-80f2-25b87ad8823c is in state STARTED 2025-09-19 11:39:05.121055 | orchestrator | 2025-09-19 11:39:05 | INFO  | Task 485803b0-ba9a-4bdf-98ce-00d981a71cc1 is in state STARTED 2025-09-19 11:39:05.121080 | orchestrator | 2025-09-19 11:39:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:08.165825 | orchestrator | 2025-09-19 11:39:08 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:08.166111 | orchestrator | 2025-09-19 11:39:08 | INFO  | Task b53c54eb-72c4-4bbc-80f2-25b87ad8823c is in state STARTED 2025-09-19 11:39:08.167934 | orchestrator | 2025-09-19 11:39:08 | INFO  | Task 485803b0-ba9a-4bdf-98ce-00d981a71cc1 is in state SUCCESS 2025-09-19 11:39:08.168024 | orchestrator | 2025-09-19 11:39:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:08.170071 | orchestrator | 2025-09-19 11:39:08.170115 | orchestrator | 2025-09-19 11:39:08.170128 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-19 11:39:08.170264 | orchestrator | 2025-09-19 11:39:08.170281 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-19 11:39:08.170293 | orchestrator | Friday 19 September 2025 11:37:59 +0000 (0:00:00.144) 0:00:00.144 ****** 2025-09-19 11:39:08.170305 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.admin.keyring) 2025-09-19 11:39:08.170318 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 11:39:08.170329 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 11:39:08.170340 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 11:39:08.170351 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 11:39:08.170362 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-19 11:39:08.170670 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-19 11:39:08.170686 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-19 11:39:08.170697 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-19 11:39:08.170708 | orchestrator | 2025-09-19 11:39:08.170719 | orchestrator | TASK [Create share directory] ************************************************** 2025-09-19 11:39:08.170730 | orchestrator | Friday 19 September 2025 11:38:03 +0000 (0:00:04.258) 0:00:04.403 ****** 2025-09-19 11:39:08.170742 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-19 11:39:08.170775 | orchestrator | 2025-09-19 11:39:08.170787 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-19 11:39:08.170798 | orchestrator | Friday 19 September 2025 11:38:04 +0000 (0:00:00.997) 0:00:05.400 ****** 2025-09-19 11:39:08.170809 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-19 11:39:08.170821 | orchestrator | changed: [testbed-manager -> localhost] => 
(item=ceph.client.cinder.keyring) 2025-09-19 11:39:08.170832 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 11:39:08.170844 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 11:39:08.170855 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 11:39:08.170866 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-19 11:39:08.170878 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-19 11:39:08.170889 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-19 11:39:08.170900 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-19 11:39:08.170911 | orchestrator | 2025-09-19 11:39:08.170923 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-19 11:39:08.170933 | orchestrator | Friday 19 September 2025 11:38:18 +0000 (0:00:13.900) 0:00:19.301 ****** 2025-09-19 11:39:08.170945 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-09-19 11:39:08.170956 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 11:39:08.170967 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 11:39:08.170978 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 11:39:08.170989 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 11:39:08.171000 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-19 11:39:08.171011 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-19 11:39:08.171022 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.gnocchi.keyring) 2025-09-19 11:39:08.171033 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-19 11:39:08.171043 | orchestrator | 2025-09-19 11:39:08.171054 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:39:08.171065 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:39:08.171078 | orchestrator | 2025-09-19 11:39:08.171089 | orchestrator | 2025-09-19 11:39:08.171100 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:39:08.171111 | orchestrator | Friday 19 September 2025 11:38:25 +0000 (0:00:06.751) 0:00:26.052 ****** 2025-09-19 11:39:08.171122 | orchestrator | =============================================================================== 2025-09-19 11:39:08.171133 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.90s 2025-09-19 11:39:08.171143 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.75s 2025-09-19 11:39:08.171154 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.26s 2025-09-19 11:39:08.171165 | orchestrator | Create share directory -------------------------------------------------- 1.00s 2025-09-19 11:39:08.171176 | orchestrator | 2025-09-19 11:39:08.171187 | orchestrator | 2025-09-19 11:39:08.171198 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:39:08.171210 | orchestrator | 2025-09-19 11:39:08.171233 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:39:08.171245 | orchestrator | Friday 19 September 2025 11:37:19 +0000 (0:00:00.281) 0:00:00.282 ****** 2025-09-19 11:39:08.171256 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:39:08.171275 | orchestrator 
| ok: [testbed-node-1] 2025-09-19 11:39:08.171286 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:39:08.171296 | orchestrator | 2025-09-19 11:39:08.171307 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:39:08.171319 | orchestrator | Friday 19 September 2025 11:37:19 +0000 (0:00:00.289) 0:00:00.571 ****** 2025-09-19 11:39:08.171330 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-19 11:39:08.171341 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-19 11:39:08.171352 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-19 11:39:08.171363 | orchestrator | 2025-09-19 11:39:08.171396 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-19 11:39:08.171407 | orchestrator | 2025-09-19 11:39:08.171419 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 11:39:08.171430 | orchestrator | Friday 19 September 2025 11:37:19 +0000 (0:00:00.400) 0:00:00.971 ****** 2025-09-19 11:39:08.171446 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:39:08.171457 | orchestrator | 2025-09-19 11:39:08.171468 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-19 11:39:08.171480 | orchestrator | Friday 19 September 2025 11:37:20 +0000 (0:00:00.495) 0:00:01.467 ****** 2025-09-19 11:39:08.171497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:39:08.171532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:39:08.171555 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:39:08.171568 | orchestrator | 2025-09-19 11:39:08.171579 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-19 11:39:08.171596 | orchestrator | Friday 19 September 2025 11:37:21 +0000 (0:00:01.239) 0:00:02.706 ****** 2025-09-19 11:39:08.171607 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:39:08.171619 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:39:08.171630 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:39:08.171641 | orchestrator | 2025-09-19 11:39:08.171652 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 11:39:08.171663 | orchestrator | Friday 19 September 2025 11:37:22 +0000 (0:00:00.469) 0:00:03.176 ****** 2025-09-19 11:39:08.171674 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-19 11:39:08.171691 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-19 11:39:08.171703 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-19 11:39:08.171714 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-19 11:39:08.171725 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-19 11:39:08.171736 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-19 11:39:08.171747 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-19 11:39:08.171758 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-19 11:39:08.171769 | orchestrator | skipping: [testbed-node-1] => 
(item={'name': 'cloudkitty', 'enabled': False})  2025-09-19 11:39:08.171780 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-19 11:39:08.171791 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-19 11:39:08.171806 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-19 11:39:08.171818 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-19 11:39:08.171829 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-19 11:39:08.171840 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-19 11:39:08.171850 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-19 11:39:08.171862 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-19 11:39:08.171873 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-19 11:39:08.171884 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-19 11:39:08.171895 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-19 11:39:08.171905 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-19 11:39:08.171916 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-19 11:39:08.171927 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-19 11:39:08.171938 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-19 11:39:08.171951 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-19 11:39:08.171964 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-19 11:39:08.171975 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-19 11:39:08.171986 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-19 11:39:08.172004 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-19 11:39:08.172015 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-19 11:39:08.172026 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-19 11:39:08.172037 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-19 11:39:08.172048 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-19 11:39:08.172060 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-19 11:39:08.172071 | orchestrator | 2025-09-19 11:39:08.172082 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 
11:39:08.172093 | orchestrator | Friday 19 September 2025 11:37:22 +0000 (0:00:00.753) 0:00:03.929 ****** 2025-09-19 11:39:08.172104 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:39:08.172115 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:39:08.172126 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:39:08.172137 | orchestrator | 2025-09-19 11:39:08.172148 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 11:39:08.172159 | orchestrator | Friday 19 September 2025 11:37:23 +0000 (0:00:00.303) 0:00:04.233 ****** 2025-09-19 11:39:08.172170 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:39:08.172181 | orchestrator | 2025-09-19 11:39:08.172198 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 11:39:08.172209 | orchestrator | Friday 19 September 2025 11:37:23 +0000 (0:00:00.135) 0:00:04.368 ****** 2025-09-19 11:39:08.172220 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:39:08.172231 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:39:08.172242 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:39:08.172253 | orchestrator | 2025-09-19 11:39:08.172264 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 11:39:08.172275 | orchestrator | Friday 19 September 2025 11:37:23 +0000 (0:00:00.485) 0:00:04.853 ****** 2025-09-19 11:39:08.172286 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:39:08.172297 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:39:08.172308 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:39:08.172319 | orchestrator | 2025-09-19 11:39:08.172330 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 11:39:08.172341 | orchestrator | Friday 19 September 2025 11:37:24 +0000 (0:00:00.316) 0:00:05.170 ****** 2025-09-19 11:39:08.172352 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 11:39:08.172363 | orchestrator |
2025-09-19 11:39:08.172391 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:39:08.172403 | orchestrator | Friday 19 September 2025 11:37:24 +0000 (0:00:00.131) 0:00:05.301 ******
2025-09-19 11:39:08.172414 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.172430 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:39:08.172441 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:39:08.172451 | orchestrator |
2025-09-19 11:39:08.172463 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:39:08.172473 | orchestrator | Friday 19 September 2025 11:37:24 +0000 (0:00:00.306) 0:00:05.608 ******
2025-09-19 11:39:08.172484 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:39:08.172496 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:39:08.172506 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:39:08.172517 | orchestrator |
2025-09-19 11:39:08.172528 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:39:08.172546 | orchestrator | Friday 19 September 2025 11:37:24 +0000 (0:00:00.302) 0:00:05.910 ******
2025-09-19 11:39:08.172557 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.172567 | orchestrator |
2025-09-19 11:39:08.172578 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:39:08.172589 | orchestrator | Friday 19 September 2025 11:37:25 +0000 (0:00:00.381) 0:00:06.291 ******
2025-09-19 11:39:08.172600 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.172611 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:39:08.172621 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:39:08.172632 | orchestrator |
2025-09-19 11:39:08.172643 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:39:08.172654 | orchestrator | Friday 19 September 2025 11:37:25 +0000 (0:00:00.297) 0:00:06.589 ******
2025-09-19 11:39:08.172665 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:39:08.172676 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:39:08.172687 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:39:08.172697 | orchestrator |
2025-09-19 11:39:08.172708 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:39:08.172719 | orchestrator | Friday 19 September 2025 11:37:25 +0000 (0:00:00.337) 0:00:06.927 ******
2025-09-19 11:39:08.172730 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.172741 | orchestrator |
2025-09-19 11:39:08.172751 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:39:08.172762 | orchestrator | Friday 19 September 2025 11:37:25 +0000 (0:00:00.121) 0:00:07.048 ******
2025-09-19 11:39:08.172773 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.172784 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:39:08.172795 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:39:08.172806 | orchestrator |
2025-09-19 11:39:08.172817 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:39:08.172828 | orchestrator | Friday 19 September 2025 11:37:26 +0000 (0:00:00.299) 0:00:07.347 ******
2025-09-19 11:39:08.172838 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:39:08.172849 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:39:08.172860 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:39:08.172871 | orchestrator |
2025-09-19 11:39:08.172882 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:39:08.172893 | orchestrator | Friday 19 September 2025 11:37:26 +0000 (0:00:00.514) 0:00:07.862 ******
2025-09-19 11:39:08.172904 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.172914 | orchestrator |
2025-09-19 11:39:08.172925 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:39:08.172936 | orchestrator | Friday 19 September 2025 11:37:26 +0000 (0:00:00.132) 0:00:07.994 ******
2025-09-19 11:39:08.172947 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.172958 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:39:08.172968 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:39:08.172979 | orchestrator |
2025-09-19 11:39:08.172990 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:39:08.173001 | orchestrator | Friday 19 September 2025 11:37:27 +0000 (0:00:00.328) 0:00:08.323 ******
2025-09-19 11:39:08.173012 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:39:08.173023 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:39:08.173034 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:39:08.173044 | orchestrator |
2025-09-19 11:39:08.173056 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:39:08.173067 | orchestrator | Friday 19 September 2025 11:37:27 +0000 (0:00:00.316) 0:00:08.640 ******
2025-09-19 11:39:08.173077 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.173088 | orchestrator |
2025-09-19 11:39:08.173099 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:39:08.173110 | orchestrator | Friday 19 September 2025 11:37:27 +0000 (0:00:00.119) 0:00:08.759 ******
2025-09-19 11:39:08.173121 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.173142 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:39:08.173153 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:39:08.173163 | orchestrator |
2025-09-19 11:39:08.173174 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:39:08.173185 | orchestrator | Friday 19 September 2025 11:37:28 +0000 (0:00:00.502) 0:00:09.261 ******
2025-09-19 11:39:08.173196 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:39:08.173213 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:39:08.173225 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:39:08.173235 | orchestrator |
2025-09-19 11:39:08.173246 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:39:08.173257 | orchestrator | Friday 19 September 2025 11:37:28 +0000 (0:00:00.326) 0:00:09.587 ******
2025-09-19 11:39:08.173268 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.173279 | orchestrator |
2025-09-19 11:39:08.173290 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:39:08.173301 | orchestrator | Friday 19 September 2025 11:37:28 +0000 (0:00:00.139) 0:00:09.727 ******
2025-09-19 11:39:08.173311 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.173322 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:39:08.173333 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:39:08.173344 | orchestrator |
2025-09-19 11:39:08.173355 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:39:08.173366 | orchestrator | Friday 19 September 2025 11:37:28 +0000 (0:00:00.307) 0:00:10.034 ******
2025-09-19 11:39:08.173406 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:39:08.173418 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:39:08.173429 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:39:08.173439 | orchestrator |
2025-09-19 11:39:08.173451 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:39:08.173476 | orchestrator | Friday 19 September 2025 11:37:29 +0000 (0:00:00.314) 0:00:10.349 ******
2025-09-19 11:39:08.173487 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.173498 | orchestrator |
2025-09-19 11:39:08.173509 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:39:08.173520 | orchestrator | Friday 19 September 2025 11:37:29 +0000 (0:00:00.137) 0:00:10.486 ******
2025-09-19 11:39:08.173531 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.173542 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:39:08.173552 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:39:08.173563 | orchestrator |
2025-09-19 11:39:08.173574 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:39:08.173585 | orchestrator | Friday 19 September 2025 11:37:29 +0000 (0:00:00.467) 0:00:10.954 ******
2025-09-19 11:39:08.173596 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:39:08.173607 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:39:08.173618 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:39:08.173629 | orchestrator |
2025-09-19 11:39:08.173640 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:39:08.173651 | orchestrator | Friday 19 September 2025 11:37:30 +0000 (0:00:00.318) 0:00:11.272 ******
2025-09-19 11:39:08.173662 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.173673 | orchestrator |
2025-09-19 11:39:08.173684 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:39:08.173695 | orchestrator | Friday 19 September 2025 11:37:30 +0000 (0:00:00.146) 0:00:11.419 ******
2025-09-19 11:39:08.173706 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.173716 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:39:08.173727 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:39:08.173738 | orchestrator |
2025-09-19 11:39:08.173749 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 11:39:08.173760 | orchestrator | Friday 19 September 2025 11:37:30 +0000 (0:00:00.304) 0:00:11.723 ******
2025-09-19 11:39:08.173771 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:39:08.173789 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:39:08.173800 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:39:08.173811 | orchestrator |
2025-09-19 11:39:08.173822 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 11:39:08.173833 | orchestrator | Friday 19 September 2025 11:37:31 +0000 (0:00:00.501) 0:00:12.225 ******
2025-09-19 11:39:08.173844 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.173854 | orchestrator |
2025-09-19 11:39:08.173866 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 11:39:08.173877 | orchestrator | Friday 19 September 2025 11:37:31 +0000 (0:00:00.119) 0:00:12.345 ******
2025-09-19 11:39:08.173887 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:39:08.173898 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:39:08.173909 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:39:08.173920 | orchestrator |
2025-09-19 11:39:08.173931 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-09-19 11:39:08.173942 | orchestrator | Friday 19 September 2025 11:37:31 +0000 (0:00:00.268) 0:00:12.613 ******
2025-09-19 11:39:08.173953 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:39:08.173963 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:39:08.173974 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:39:08.173985 | orchestrator |
2025-09-19 11:39:08.173996 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-09-19 11:39:08.174007 | orchestrator | Friday 19 September 2025 11:37:33 +0000 (0:00:01.717) 0:00:14.330 ******
2025-09-19 11:39:08.174055 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-19 11:39:08.174069 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-19 11:39:08.174080 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-19 11:39:08.174091 | orchestrator |
2025-09-19 11:39:08.174102 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-09-19 11:39:08.174113 | orchestrator | Friday 19 September 2025 11:37:35 +0000 (0:00:02.050) 0:00:16.380 ******
2025-09-19 11:39:08.174124 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-19 11:39:08.174135 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-19 11:39:08.174146 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-19 11:39:08.174157 | orchestrator |
2025-09-19 11:39:08.174168 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-09-19 11:39:08.174186 | orchestrator | Friday 19 September 2025 11:37:37 +0000 (0:00:02.427) 0:00:18.808 ******
2025-09-19 11:39:08.174198 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-19 11:39:08.174209 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-19 11:39:08.174220 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-19 11:39:08.174230 | orchestrator |
2025-09-19 11:39:08.174242 | orchestrator | TASK [horizon : Copying over existing policy file] 
***************************** 2025-09-19 11:39:08.174253 | orchestrator | Friday 19 September 2025 11:37:39 +0000 (0:00:01.588) 0:00:20.396 ****** 2025-09-19 11:39:08.174263 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:39:08.174275 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:39:08.174286 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:39:08.174297 | orchestrator | 2025-09-19 11:39:08.174307 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-19 11:39:08.174319 | orchestrator | Friday 19 September 2025 11:37:39 +0000 (0:00:00.301) 0:00:20.697 ****** 2025-09-19 11:39:08.174329 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:39:08.174340 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:39:08.174364 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:39:08.174396 | orchestrator | 2025-09-19 11:39:08.174407 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 11:39:08.174418 | orchestrator | Friday 19 September 2025 11:37:39 +0000 (0:00:00.286) 0:00:20.984 ****** 2025-09-19 11:39:08.174429 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:39:08.174440 | orchestrator | 2025-09-19 11:39:08.174452 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-19 11:39:08.174462 | orchestrator | Friday 19 September 2025 11:37:40 +0000 (0:00:00.828) 0:00:21.812 ****** 2025-09-19 11:39:08.174475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:39:08.174504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2025-09-19 11:39:08.174538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:39:08.174551 | orchestrator | 2025-09-19 11:39:08.174562 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-19 11:39:08.174573 | orchestrator | Friday 19 September 2025 11:37:42 +0000 (0:00:01.628) 0:00:23.441 ****** 2025-09-19 11:39:08.174598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:39:08.174620 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:39:08.174639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:39:08.174651 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:39:08.174669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:39:08.174691 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:39:08.174702 | orchestrator | 2025-09-19 11:39:08.174713 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-19 11:39:08.174724 | orchestrator | Friday 19 September 2025 11:37:42 +0000 (0:00:00.623) 0:00:24.065 ****** 2025-09-19 11:39:08.174744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:39:08.174766 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 11:39:08.174783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:39:08.174795 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:39:08.174820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 11:39:08.174840 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:39:08.174851 | orchestrator | 2025-09-19 11:39:08.174862 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-19 11:39:08.174873 | orchestrator | Friday 19 September 2025 11:37:44 +0000 (0:00:01.360) 0:00:25.425 ****** 2025-09-19 11:39:08.174885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:39:08.174911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:39:08.174936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 11:39:08.174948 | orchestrator | 2025-09-19 11:39:08.174960 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 11:39:08.174971 | orchestrator | Friday 19 September 2025 11:37:45 +0000 (0:00:01.566) 0:00:26.992 ****** 2025-09-19 11:39:08.174982 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:39:08.174993 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:39:08.175011 | orchestrator | 
skipping: [testbed-node-2] 2025-09-19 11:39:08.175022 | orchestrator | 2025-09-19 11:39:08.175033 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 11:39:08.175044 | orchestrator | Friday 19 September 2025 11:37:46 +0000 (0:00:00.294) 0:00:27.287 ****** 2025-09-19 11:39:08.175060 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:39:08.175072 | orchestrator | 2025-09-19 11:39:08.175083 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-19 11:39:08.175094 | orchestrator | Friday 19 September 2025 11:37:46 +0000 (0:00:00.757) 0:00:28.045 ****** 2025-09-19 11:39:08.175105 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:39:08.175116 | orchestrator | 2025-09-19 11:39:08.175127 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-19 11:39:08.175138 | orchestrator | Friday 19 September 2025 11:37:49 +0000 (0:00:02.232) 0:00:30.277 ****** 2025-09-19 11:39:08.175148 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:39:08.175159 | orchestrator | 2025-09-19 11:39:08.175170 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-19 11:39:08.175181 | orchestrator | Friday 19 September 2025 11:37:51 +0000 (0:00:02.339) 0:00:32.617 ****** 2025-09-19 11:39:08.175192 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:39:08.175203 | orchestrator | 2025-09-19 11:39:08.175214 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 11:39:08.175224 | orchestrator | Friday 19 September 2025 11:38:07 +0000 (0:00:16.238) 0:00:48.855 ****** 2025-09-19 11:39:08.175235 | orchestrator | 2025-09-19 11:39:08.175246 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 
11:39:08.175262 | orchestrator | Friday 19 September 2025 11:38:07 +0000 (0:00:00.069) 0:00:48.925 ****** 2025-09-19 11:39:08.175273 | orchestrator | 2025-09-19 11:39:08.175284 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 11:39:08.175295 | orchestrator | Friday 19 September 2025 11:38:07 +0000 (0:00:00.066) 0:00:48.991 ****** 2025-09-19 11:39:08.175306 | orchestrator | 2025-09-19 11:39:08.175317 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-19 11:39:08.175328 | orchestrator | Friday 19 September 2025 11:38:07 +0000 (0:00:00.065) 0:00:49.057 ****** 2025-09-19 11:39:08.175338 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:39:08.175349 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:39:08.175360 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:39:08.175391 | orchestrator | 2025-09-19 11:39:08.175402 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:39:08.175413 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-19 11:39:08.175424 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-19 11:39:08.175435 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-19 11:39:08.175446 | orchestrator | 2025-09-19 11:39:08.175457 | orchestrator | 2025-09-19 11:39:08.175468 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:39:08.175479 | orchestrator | Friday 19 September 2025 11:39:05 +0000 (0:00:57.374) 0:01:46.432 ****** 2025-09-19 11:39:08.175490 | orchestrator | =============================================================================== 2025-09-19 11:39:08.175501 | orchestrator | horizon : Restart horizon container 
------------------------------------ 57.37s 2025-09-19 11:39:08.175512 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.24s 2025-09-19 11:39:08.175524 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.43s 2025-09-19 11:39:08.175535 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.34s 2025-09-19 11:39:08.175552 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.23s 2025-09-19 11:39:08.175563 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.05s 2025-09-19 11:39:08.175574 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.72s 2025-09-19 11:39:08.175585 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.63s 2025-09-19 11:39:08.175596 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.59s 2025-09-19 11:39:08.175607 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.57s 2025-09-19 11:39:08.175617 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.36s 2025-09-19 11:39:08.175629 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.24s 2025-09-19 11:39:08.175640 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s 2025-09-19 11:39:08.175650 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s 2025-09-19 11:39:08.175661 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2025-09-19 11:39:08.175672 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.62s 2025-09-19 11:39:08.175683 | orchestrator | horizon : Update policy file name 
--------------------------------------- 0.51s 2025-09-19 11:39:08.175694 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s 2025-09-19 11:39:08.175705 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s 2025-09-19 11:39:08.175716 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s 2025-09-19 11:39:11.218848 | orchestrator | 2025-09-19 11:39:11 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:11.222563 | orchestrator | 2025-09-19 11:39:11 | INFO  | Task b53c54eb-72c4-4bbc-80f2-25b87ad8823c is in state STARTED 2025-09-19 11:39:11.222597 | orchestrator | 2025-09-19 11:39:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:14.263738 | orchestrator | 2025-09-19 11:39:14 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:14.264030 | orchestrator | 2025-09-19 11:39:14 | INFO  | Task b53c54eb-72c4-4bbc-80f2-25b87ad8823c is in state STARTED 2025-09-19 11:39:14.264096 | orchestrator | 2025-09-19 11:39:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:17.299056 | orchestrator | 2025-09-19 11:39:17 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:17.300580 | orchestrator | 2025-09-19 11:39:17 | INFO  | Task b53c54eb-72c4-4bbc-80f2-25b87ad8823c is in state STARTED 2025-09-19 11:39:17.300610 | orchestrator | 2025-09-19 11:39:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:20.350184 | orchestrator | 2025-09-19 11:39:20 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:20.351524 | orchestrator | 2025-09-19 11:39:20 | INFO  | Task b53c54eb-72c4-4bbc-80f2-25b87ad8823c is in state STARTED 2025-09-19 11:39:20.351563 | orchestrator | 2025-09-19 11:39:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:23.394968 | orchestrator | 
2025-09-19 11:39:23 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:23.397171 | orchestrator | 2025-09-19 11:39:23 | INFO  | Task b53c54eb-72c4-4bbc-80f2-25b87ad8823c is in state STARTED 2025-09-19 11:39:23.397242 | orchestrator | 2025-09-19 11:39:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:26.458664 | orchestrator | 2025-09-19 11:39:26 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:26.462963 | orchestrator | 2025-09-19 11:39:26 | INFO  | Task b53c54eb-72c4-4bbc-80f2-25b87ad8823c is in state SUCCESS 2025-09-19 11:39:26.465854 | orchestrator | 2025-09-19 11:39:26 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:39:26.468763 | orchestrator | 2025-09-19 11:39:26 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:39:26.470490 | orchestrator | 2025-09-19 11:39:26 | INFO  | Task 0274457c-2902-4f71-817d-1a51d8ebb7d6 is in state STARTED 2025-09-19 11:39:26.470871 | orchestrator | 2025-09-19 11:39:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:29.519716 | orchestrator | 2025-09-19 11:39:29 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:29.519904 | orchestrator | 2025-09-19 11:39:29 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:39:29.520841 | orchestrator | 2025-09-19 11:39:29 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:39:29.521845 | orchestrator | 2025-09-19 11:39:29 | INFO  | Task 0274457c-2902-4f71-817d-1a51d8ebb7d6 is in state STARTED 2025-09-19 11:39:29.521874 | orchestrator | 2025-09-19 11:39:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:32.568713 | orchestrator | 2025-09-19 11:39:32 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:32.568807 | orchestrator | 2025-09-19 11:39:32 | INFO  | 
Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:39:32.568821 | orchestrator | 2025-09-19 11:39:32 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:39:32.568832 | orchestrator | 2025-09-19 11:39:32 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED 2025-09-19 11:39:32.568843 | orchestrator | 2025-09-19 11:39:32 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:39:32.568854 | orchestrator | 2025-09-19 11:39:32 | INFO  | Task 0274457c-2902-4f71-817d-1a51d8ebb7d6 is in state SUCCESS 2025-09-19 11:39:32.568865 | orchestrator | 2025-09-19 11:39:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:35.638476 | orchestrator | 2025-09-19 11:39:35 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:35.638529 | orchestrator | 2025-09-19 11:39:35 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:39:35.638537 | orchestrator | 2025-09-19 11:39:35 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:39:35.638543 | orchestrator | 2025-09-19 11:39:35 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED 2025-09-19 11:39:35.638550 | orchestrator | 2025-09-19 11:39:35 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:39:35.638556 | orchestrator | 2025-09-19 11:39:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:38.666894 | orchestrator | 2025-09-19 11:39:38 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:38.667292 | orchestrator | 2025-09-19 11:39:38 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:39:38.668018 | orchestrator | 2025-09-19 11:39:38 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:39:38.668866 | orchestrator | 2025-09-19 11:39:38 | INFO  | Task 
27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED 2025-09-19 11:39:38.669503 | orchestrator | 2025-09-19 11:39:38 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:39:38.669859 | orchestrator | 2025-09-19 11:39:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:41.706599 | orchestrator | 2025-09-19 11:39:41 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:41.706891 | orchestrator | 2025-09-19 11:39:41 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:39:41.708206 | orchestrator | 2025-09-19 11:39:41 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:39:41.709489 | orchestrator | 2025-09-19 11:39:41 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED 2025-09-19 11:39:41.710268 | orchestrator | 2025-09-19 11:39:41 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:39:41.711096 | orchestrator | 2025-09-19 11:39:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:44.759446 | orchestrator | 2025-09-19 11:39:44 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:44.761569 | orchestrator | 2025-09-19 11:39:44 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:39:44.764016 | orchestrator | 2025-09-19 11:39:44 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:39:44.766238 | orchestrator | 2025-09-19 11:39:44 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED 2025-09-19 11:39:44.768262 | orchestrator | 2025-09-19 11:39:44 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:39:44.768498 | orchestrator | 2025-09-19 11:39:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:47.819542 | orchestrator | 2025-09-19 11:39:47 | INFO  | Task 
e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:47.819960 | orchestrator | 2025-09-19 11:39:47 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:39:47.820813 | orchestrator | 2025-09-19 11:39:47 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:39:47.821883 | orchestrator | 2025-09-19 11:39:47 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED 2025-09-19 11:39:47.822594 | orchestrator | 2025-09-19 11:39:47 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:39:47.822710 | orchestrator | 2025-09-19 11:39:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:50.856590 | orchestrator | 2025-09-19 11:39:50 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:50.858645 | orchestrator | 2025-09-19 11:39:50 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:39:50.860262 | orchestrator | 2025-09-19 11:39:50 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:39:50.862123 | orchestrator | 2025-09-19 11:39:50 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED 2025-09-19 11:39:50.863636 | orchestrator | 2025-09-19 11:39:50 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:39:50.863705 | orchestrator | 2025-09-19 11:39:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:53.904916 | orchestrator | 2025-09-19 11:39:53 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:53.906887 | orchestrator | 2025-09-19 11:39:53 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:39:53.907795 | orchestrator | 2025-09-19 11:39:53 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:39:53.909458 | orchestrator | 2025-09-19 11:39:53 | INFO  | Task 
27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED 2025-09-19 11:39:53.910534 | orchestrator | 2025-09-19 11:39:53 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:39:53.910565 | orchestrator | 2025-09-19 11:39:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:56.949014 | orchestrator | 2025-09-19 11:39:56 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:56.949115 | orchestrator | 2025-09-19 11:39:56 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:39:56.949794 | orchestrator | 2025-09-19 11:39:56 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:39:56.950558 | orchestrator | 2025-09-19 11:39:56 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED 2025-09-19 11:39:56.951293 | orchestrator | 2025-09-19 11:39:56 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:39:56.951404 | orchestrator | 2025-09-19 11:39:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:39:59.985423 | orchestrator | 2025-09-19 11:39:59 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:39:59.985594 | orchestrator | 2025-09-19 11:39:59 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:39:59.985613 | orchestrator | 2025-09-19 11:39:59 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:39:59.985636 | orchestrator | 2025-09-19 11:39:59 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED 2025-09-19 11:39:59.986107 | orchestrator | 2025-09-19 11:39:59 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:39:59.986130 | orchestrator | 2025-09-19 11:39:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:03.013991 | orchestrator | 2025-09-19 11:40:03 | INFO  | Task 
e25c588c-2d42-4434-8ea1-b652d099b28e is in state STARTED 2025-09-19 11:40:03.016240 | orchestrator | 2025-09-19 11:40:03 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:40:03.016595 | orchestrator | 2025-09-19 11:40:03 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:40:03.017242 | orchestrator | 2025-09-19 11:40:03 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED 2025-09-19 11:40:03.017628 | orchestrator | 2025-09-19 11:40:03 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:40:03.017651 | orchestrator | 2025-09-19 11:40:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:06.043170 | orchestrator | 2025-09-19 11:40:06 | INFO  | Task e25c588c-2d42-4434-8ea1-b652d099b28e is in state SUCCESS 2025-09-19 11:40:06.044231 | orchestrator | 2025-09-19 11:40:06.044265 | orchestrator | 2025-09-19 11:40:06.044278 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-19 11:40:06.044290 | orchestrator | 2025-09-19 11:40:06.044301 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-19 11:40:06.044312 | orchestrator | Friday 19 September 2025 11:38:29 +0000 (0:00:00.215) 0:00:00.215 ****** 2025-09-19 11:40:06.044358 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-19 11:40:06.044370 | orchestrator | 2025-09-19 11:40:06.044382 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-19 11:40:06.044392 | orchestrator | Friday 19 September 2025 11:38:29 +0000 (0:00:00.231) 0:00:00.447 ****** 2025-09-19 11:40:06.044403 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-19 11:40:06.044514 | orchestrator | changed: [testbed-manager] => 
(item=/opt/cephclient/data)
2025-09-19 11:40:06.045137 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-09-19 11:40:06.045153 | orchestrator |
2025-09-19 11:40:06.045164 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-09-19 11:40:06.045175 | orchestrator | Friday 19 September 2025 11:38:30 +0000 (0:00:01.169) 0:00:01.616 ******
2025-09-19 11:40:06.045186 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-09-19 11:40:06.045197 | orchestrator |
2025-09-19 11:40:06.045208 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-09-19 11:40:06.045219 | orchestrator | Friday 19 September 2025 11:38:31 +0000 (0:00:01.149) 0:00:02.766 ******
2025-09-19 11:40:06.045229 | orchestrator | changed: [testbed-manager]
2025-09-19 11:40:06.045240 | orchestrator |
2025-09-19 11:40:06.045251 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-09-19 11:40:06.045262 | orchestrator | Friday 19 September 2025 11:38:32 +0000 (0:00:01.016) 0:00:03.782 ******
2025-09-19 11:40:06.045273 | orchestrator | changed: [testbed-manager]
2025-09-19 11:40:06.045283 | orchestrator |
2025-09-19 11:40:06.045294 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-09-19 11:40:06.045305 | orchestrator | Friday 19 September 2025 11:38:33 +0000 (0:00:00.914) 0:00:04.697 ******
2025-09-19 11:40:06.045337 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-09-19 11:40:06.045349 | orchestrator | ok: [testbed-manager]
2025-09-19 11:40:06.045359 | orchestrator |
2025-09-19 11:40:06.045370 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-09-19 11:40:06.045381 | orchestrator | Friday 19 September 2025 11:39:15 +0000 (0:00:41.421) 0:00:46.118 ******
2025-09-19 11:40:06.045392 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-09-19 11:40:06.045403 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-09-19 11:40:06.045414 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-09-19 11:40:06.045424 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-09-19 11:40:06.045435 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-09-19 11:40:06.045445 | orchestrator |
2025-09-19 11:40:06.045545 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-09-19 11:40:06.045557 | orchestrator | Friday 19 September 2025 11:39:18 +0000 (0:00:03.885) 0:00:50.003 ******
2025-09-19 11:40:06.045568 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-09-19 11:40:06.045579 | orchestrator |
2025-09-19 11:40:06.045590 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-09-19 11:40:06.045601 | orchestrator | Friday 19 September 2025 11:39:19 +0000 (0:00:00.149) 0:00:50.535 ******
2025-09-19 11:40:06.045612 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:40:06.045623 | orchestrator |
2025-09-19 11:40:06.045647 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-09-19 11:40:06.045658 | orchestrator | Friday 19 September 2025 11:39:19 +0000 (0:00:00.292) 0:00:50.685 ******
2025-09-19 11:40:06.045669 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:40:06.045680 | orchestrator |
2025-09-19 11:40:06.045691 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-09-19 11:40:06.045702 | orchestrator | Friday 19 September 2025 11:39:19 +0000 (0:00:00.292) 0:00:50.978 ******
2025-09-19 11:40:06.045712 | orchestrator | changed: [testbed-manager]
2025-09-19 11:40:06.045723 | orchestrator |
2025-09-19 11:40:06.045734 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-09-19 11:40:06.045744 | orchestrator | Friday 19 September 2025 11:39:21 +0000 (0:00:01.701) 0:00:52.679 ******
2025-09-19 11:40:06.045755 | orchestrator | changed: [testbed-manager]
2025-09-19 11:40:06.045766 | orchestrator |
2025-09-19 11:40:06.045777 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-09-19 11:40:06.045787 | orchestrator | Friday 19 September 2025 11:39:22 +0000 (0:00:00.795) 0:00:53.475 ******
2025-09-19 11:40:06.045807 | orchestrator | changed: [testbed-manager]
2025-09-19 11:40:06.045818 | orchestrator |
2025-09-19 11:40:06.045829 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-09-19 11:40:06.045839 | orchestrator | Friday 19 September 2025 11:39:23 +0000 (0:00:00.613) 0:00:54.088 ******
2025-09-19 11:40:06.045850 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-09-19 11:40:06.045861 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-09-19 11:40:06.045872 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-09-19 11:40:06.045882 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-09-19 11:40:06.045893 | orchestrator |
2025-09-19 11:40:06.045904 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:40:06.045915 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:40:06.045927 | orchestrator |
2025-09-19 11:40:06.045938 | orchestrator |
2025-09-19 11:40:06.045986 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:40:06.045999 | orchestrator | Friday 19 September 2025 11:39:24 +0000 (0:00:01.472) 0:00:55.560 ******
2025-09-19 11:40:06.046010 | orchestrator | ===============================================================================
2025-09-19 11:40:06.046071 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.42s
2025-09-19 11:40:06.046083 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.89s
2025-09-19 11:40:06.046094 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.70s
2025-09-19 11:40:06.046105 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.47s
2025-09-19 11:40:06.046116 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.17s
2025-09-19 11:40:06.046127 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.15s
2025-09-19 11:40:06.046138 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.02s
2025-09-19 11:40:06.046148 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.91s
2025-09-19 11:40:06.046159 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.80s
2025-09-19 11:40:06.046170 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s
2025-09-19 11:40:06.046183 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.53s
2025-09-19 11:40:06.046195 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s
2025-09-19 11:40:06.046208 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2025-09-19 11:40:06.046220 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s
2025-09-19 11:40:06.046233 | orchestrator |
2025-09-19 11:40:06.046246 | orchestrator |
2025-09-19 11:40:06.046259 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:40:06.046271 | orchestrator |
2025-09-19 11:40:06.046284 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:40:06.046296 | orchestrator | Friday 19 September 2025 11:39:28 +0000 (0:00:00.176) 0:00:00.176 ******
2025-09-19 11:40:06.046309 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:06.046346 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:06.046359 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:06.046370 | orchestrator |
2025-09-19 11:40:06.046381 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:40:06.046392 | orchestrator | Friday 19 September 2025 11:39:29 +0000 (0:00:00.299) 0:00:00.476 ******
2025-09-19 11:40:06.046403 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-19 11:40:06.046414 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-19 11:40:06.046425 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-19 11:40:06.046435 | orchestrator |
2025-09-19 11:40:06.046454 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-09-19 11:40:06.046465 | orchestrator |
2025-09-19 11:40:06.046475 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-09-19 11:40:06.046486 | orchestrator | Friday 19 September 2025 11:39:29 +0000 (0:00:00.699) 0:00:01.175 ******
2025-09-19 11:40:06.046497 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:06.046508 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:06.046601 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:06.046613 | orchestrator |
2025-09-19 11:40:06.046624 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:40:06.046636 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:40:06.046654 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:40:06.046666 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:40:06.046676 | orchestrator |
2025-09-19 11:40:06.046687 | orchestrator |
2025-09-19 11:40:06.046698 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:40:06.046709 | orchestrator | Friday 19 September 2025 11:39:30 +0000 (0:00:00.750) 0:00:01.926 ******
2025-09-19 11:40:06.046720 | orchestrator | ===============================================================================
2025-09-19 11:40:06.046730 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.75s
2025-09-19 11:40:06.046741 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s
2025-09-19 11:40:06.046751 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-09-19 11:40:06.046762 | orchestrator |
2025-09-19 11:40:06.046772 | orchestrator |
2025-09-19 11:40:06.046783 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:40:06.046794 | orchestrator |
2025-09-19 11:40:06.046805 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:40:06.046815 | orchestrator | Friday 19 September 2025 11:37:19 +0000 (0:00:00.271) 0:00:00.271 ******
2025-09-19 11:40:06.046826 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:06.046836 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:06.046847 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:06.046858 | orchestrator |
2025-09-19 11:40:06.046869 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:40:06.046879 | orchestrator | Friday 19 September 2025 11:37:19 +0000 (0:00:00.318) 0:00:00.589 ******
2025-09-19 11:40:06.046890 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-19 11:40:06.046901 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-19 11:40:06.046911 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-19 11:40:06.046922 | orchestrator |
2025-09-19 11:40:06.046933 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-09-19 11:40:06.046944 | orchestrator |
2025-09-19 11:40:06.046986 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-19 11:40:06.046999 | orchestrator | Friday 19 September 2025 11:37:19 +0000 (0:00:00.426) 0:00:01.016 ******
2025-09-19 11:40:06.047010 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:40:06.047021 | orchestrator |
2025-09-19 11:40:06.047032 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-09-19 11:40:06.047042 | orchestrator | Friday 19 September 2025 11:37:20 +0000 (0:00:00.568) 0:00:01.584 ******
2025-09-19 11:40:06.047058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '',
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.047083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.047101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.047143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:40:06.047158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:40:06.047176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:40:06.047187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.047199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:40:06.047214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-19 11:40:06.047226 | orchestrator |
2025-09-19 11:40:06.047237 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-09-19 11:40:06.047248 | orchestrator | Friday 19 September 2025 11:37:22 +0000 (0:00:01.702) 0:00:03.287 ******
2025-09-19 11:40:06.047259 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-09-19 11:40:06.047270 | orchestrator |
2025-09-19 11:40:06.047281 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-09-19 11:40:06.047292 | orchestrator | Friday 19 September 2025 11:37:22 +0000 (0:00:00.835) 0:00:04.122 ******
2025-09-19 11:40:06.047302 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:06.047366 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:06.047380 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:06.047391 | orchestrator |
2025-09-19 11:40:06.047402 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-09-19 11:40:06.047413 | orchestrator | Friday 19 September 2025 11:37:23 +0000 (0:00:00.448) 0:00:04.571 ******
2025-09-19 11:40:06.047424 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 11:40:06.047434 | orchestrator |
2025-09-19 11:40:06.047446 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-19 11:40:06.047462 | orchestrator | Friday 19 September 2025 11:37:24 +0000 (0:00:00.693) 0:00:05.265 ******
2025-09-19 11:40:06.047481 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:40:06.047492 | orchestrator |
2025-09-19 11:40:06.047503 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-09-19 11:40:06.047513 | orchestrator | Friday 19 September 2025 11:37:24 +0000 (0:00:00.520) 0:00:05.786 ******
2025-09-19 11:40:06.047526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-19 11:40:06.047538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone',
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.047556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}}) 2025-09-19 11:40:06.047568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:40:06.047595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:40:06.047607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:40:06.047619 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.047630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.047739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.047754 | 
orchestrator | 2025-09-19 11:40:06.047765 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-19 11:40:06.047777 | orchestrator | Friday 19 September 2025 11:37:28 +0000 (0:00:03.709) 0:00:09.495 ****** 2025-09-19 11:40:06.047796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 11:40:06.047817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 
11:40:06.047828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 11:40:06.047840 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:06.047852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 11:40:06.047874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:40:06.047886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 11:40:06.047903 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:06.047922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 11:40:06.047935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:40:06.047947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 11:40:06.047958 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:06.047969 | orchestrator | 2025-09-19 11:40:06.047981 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-19 11:40:06.047992 | orchestrator | Friday 19 September 2025 11:37:28 +0000 (0:00:00.563) 0:00:10.059 ****** 2025-09-19 11:40:06.048008 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 11:40:06.048026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:40:06.048045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 11:40:06.048056 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:06.048068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 11:40:06.048081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:40:06.048096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 11:40:06.048108 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:06.048120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})  2025-09-19 11:40:06.048144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:40:06.048157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 11:40:06.048168 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:06.048179 | orchestrator | 2025-09-19 11:40:06.048190 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-19 11:40:06.048202 | orchestrator | Friday 19 September 2025 11:37:29 +0000 (0:00:00.799) 0:00:10.859 ****** 2025-09-19 11:40:06.048213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.048230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.048255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.048268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:40:06.048279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:40:06.048291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:40:06.048306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.048378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.048392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.048406 | orchestrator | 2025-09-19 11:40:06.048419 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-19 11:40:06.048431 | orchestrator | Friday 19 September 2025 11:37:33 +0000 (0:00:03.473) 0:00:14.333 ****** 2025-09-19 11:40:06.048454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.048469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:40:06.048487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2025-09-19 11:40:06.048506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:40:06.048524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.048536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:40:06.048547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.048559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.048575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.048592 | orchestrator | 2025-09-19 11:40:06.048603 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-19 11:40:06.048614 | orchestrator | Friday 19 September 2025 11:37:38 +0000 (0:00:05.283) 0:00:19.616 ****** 2025-09-19 11:40:06.048625 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:40:06.048636 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:40:06.048647 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:40:06.048658 | orchestrator | 2025-09-19 11:40:06.048668 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-19 11:40:06.048679 | orchestrator | Friday 19 September 2025 11:37:39 +0000 (0:00:01.391) 0:00:21.007 ****** 2025-09-19 11:40:06.048690 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:06.048700 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:06.048711 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:06.048722 | orchestrator | 2025-09-19 11:40:06.048733 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-19 11:40:06.048743 | orchestrator | Friday 19 September 2025 11:37:40 +0000 (0:00:00.576) 0:00:21.584 ****** 2025-09-19 11:40:06.048754 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:06.048764 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:06.048775 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:06.048786 | orchestrator 
| 2025-09-19 11:40:06.048796 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-19 11:40:06.048807 | orchestrator | Friday 19 September 2025 11:37:40 +0000 (0:00:00.301) 0:00:21.886 ****** 2025-09-19 11:40:06.048818 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:06.048828 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:06.048839 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:06.048849 | orchestrator | 2025-09-19 11:40:06.048860 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-19 11:40:06.048871 | orchestrator | Friday 19 September 2025 11:37:41 +0000 (0:00:00.502) 0:00:22.388 ****** 2025-09-19 11:40:06.048889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.048901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:40:06.048917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.048935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:40:06.048950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.048962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 11:40:06.048972 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.048987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.049002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.049012 | orchestrator | 2025-09-19 
11:40:06.049022 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 11:40:06.049031 | orchestrator | Friday 19 September 2025 11:37:43 +0000 (0:00:02.404) 0:00:24.793 ****** 2025-09-19 11:40:06.049041 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:06.049050 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:06.049060 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:06.049069 | orchestrator | 2025-09-19 11:40:06.049079 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-19 11:40:06.049089 | orchestrator | Friday 19 September 2025 11:37:44 +0000 (0:00:00.355) 0:00:25.148 ****** 2025-09-19 11:40:06.049098 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 11:40:06.049108 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 11:40:06.049117 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 11:40:06.049127 | orchestrator | 2025-09-19 11:40:06.049136 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-19 11:40:06.049146 | orchestrator | Friday 19 September 2025 11:37:46 +0000 (0:00:01.988) 0:00:27.136 ****** 2025-09-19 11:40:06.049156 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:40:06.049165 | orchestrator | 2025-09-19 11:40:06.049175 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-19 11:40:06.049184 | orchestrator | Friday 19 September 2025 11:37:47 +0000 (0:00:01.378) 0:00:28.515 ****** 2025-09-19 11:40:06.049194 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:06.049203 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:06.049213 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 11:40:06.049222 | orchestrator | 2025-09-19 11:40:06.049232 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-19 11:40:06.049241 | orchestrator | Friday 19 September 2025 11:37:47 +0000 (0:00:00.533) 0:00:29.048 ****** 2025-09-19 11:40:06.049251 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:40:06.049266 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-19 11:40:06.049276 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 11:40:06.049286 | orchestrator | 2025-09-19 11:40:06.049296 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-19 11:40:06.049305 | orchestrator | Friday 19 September 2025 11:37:48 +0000 (0:00:01.005) 0:00:30.054 ****** 2025-09-19 11:40:06.049337 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:40:06.049347 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:40:06.049357 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:40:06.049366 | orchestrator | 2025-09-19 11:40:06.049376 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-19 11:40:06.049385 | orchestrator | Friday 19 September 2025 11:37:49 +0000 (0:00:00.282) 0:00:30.336 ****** 2025-09-19 11:40:06.049395 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 11:40:06.049404 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 11:40:06.049414 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 11:40:06.049423 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 11:40:06.049433 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 11:40:06.049442 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 11:40:06.049452 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 11:40:06.049462 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 11:40:06.049471 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 11:40:06.049480 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 11:40:06.049490 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 11:40:06.049499 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 11:40:06.049509 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 11:40:06.049518 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 11:40:06.049528 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 11:40:06.049537 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 11:40:06.049547 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 11:40:06.049556 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 11:40:06.049566 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 11:40:06.049575 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 
11:40:06.049589 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 11:40:06.049598 | orchestrator | 2025-09-19 11:40:06.049608 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-19 11:40:06.049618 | orchestrator | Friday 19 September 2025 11:37:58 +0000 (0:00:09.073) 0:00:39.410 ****** 2025-09-19 11:40:06.049627 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 11:40:06.049636 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 11:40:06.049646 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 11:40:06.049655 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 11:40:06.049665 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 11:40:06.049679 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 11:40:06.049689 | orchestrator | 2025-09-19 11:40:06.049698 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-19 11:40:06.049708 | orchestrator | Friday 19 September 2025 11:38:00 +0000 (0:00:02.510) 0:00:41.920 ****** 2025-09-19 11:40:06.049724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.049737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.049748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 11:40:06.049763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:40:06.049780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 
11:40:06.049796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 11:40:06.049806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.049817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.049827 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 11:40:06.049836 | orchestrator | 2025-09-19 11:40:06.049846 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 11:40:06.049856 | orchestrator | Friday 19 September 2025 11:38:03 +0000 (0:00:02.439) 0:00:44.360 ****** 2025-09-19 11:40:06.049866 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:40:06.049875 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:40:06.049885 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:40:06.049894 | orchestrator | 2025-09-19 11:40:06.049904 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-19 11:40:06.049913 | orchestrator | Friday 19 September 2025 11:38:03 +0000 (0:00:00.301) 0:00:44.661 ****** 2025-09-19 11:40:06.049931 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:40:06.049941 | orchestrator | 2025-09-19 11:40:06.049951 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-19 11:40:06.049961 | orchestrator | Friday 19 September 2025 11:38:05 +0000 (0:00:02.188) 0:00:46.850 ****** 2025-09-19 11:40:06.049970 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:40:06.049979 | orchestrator | 2025-09-19 11:40:06.049989 | orchestrator | TASK [keystone : Checking for any running keystone_fernet 
containers] **********
2025-09-19 11:40:06.049999 | orchestrator | Friday 19 September 2025 11:38:07 +0000 (0:00:02.223) 0:00:49.073 ******
2025-09-19 11:40:06.050008 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:06.050044 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:06.050056 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:06.050066 | orchestrator |
2025-09-19 11:40:06.050075 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-09-19 11:40:06.050085 | orchestrator | Friday 19 September 2025 11:38:09 +0000 (0:00:01.498) 0:00:50.572 ******
2025-09-19 11:40:06.050095 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:06.050104 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:06.050114 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:06.050124 | orchestrator |
2025-09-19 11:40:06.050133 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-09-19 11:40:06.050143 | orchestrator | Friday 19 September 2025 11:38:10 +0000 (0:00:00.603) 0:00:51.176 ******
2025-09-19 11:40:06.050153 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:06.050162 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:06.050172 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:06.050181 | orchestrator |
2025-09-19 11:40:06.050191 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-09-19 11:40:06.050201 | orchestrator | Friday 19 September 2025 11:38:10 +0000 (0:00:00.589) 0:00:51.765 ******
2025-09-19 11:40:06.050211 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:40:06.050220 | orchestrator |
2025-09-19 11:40:06.050230 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-09-19 11:40:06.050239 | orchestrator | Friday 19 September 2025 11:38:24 +0000 (0:00:13.714) 0:01:05.479 ******
2025-09-19 11:40:06.050249 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:40:06.050258 | orchestrator |
2025-09-19 11:40:06.050274 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-09-19 11:40:06.050284 | orchestrator | Friday 19 September 2025 11:38:35 +0000 (0:00:10.656) 0:01:16.136 ******
2025-09-19 11:40:06.050294 | orchestrator |
2025-09-19 11:40:06.050303 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-09-19 11:40:06.050325 | orchestrator | Friday 19 September 2025 11:38:35 +0000 (0:00:00.098) 0:01:16.234 ******
2025-09-19 11:40:06.050335 | orchestrator |
2025-09-19 11:40:06.050345 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-09-19 11:40:06.050355 | orchestrator | Friday 19 September 2025 11:38:35 +0000 (0:00:00.287) 0:01:16.522 ******
2025-09-19 11:40:06.050364 | orchestrator |
2025-09-19 11:40:06.050374 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-09-19 11:40:06.050384 | orchestrator | Friday 19 September 2025 11:38:35 +0000 (0:00:00.069) 0:01:16.592 ******
2025-09-19 11:40:06.050393 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:40:06.050403 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:40:06.050413 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:40:06.050422 | orchestrator |
2025-09-19 11:40:06.050432 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-09-19 11:40:06.050442 | orchestrator | Friday 19 September 2025 11:38:57 +0000 (0:00:21.853) 0:01:38.445 ******
2025-09-19 11:40:06.050452 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:40:06.050461 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:40:06.050471 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:40:06.050480 | orchestrator |
2025-09-19 11:40:06.050490 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-09-19 11:40:06.050507 | orchestrator | Friday 19 September 2025 11:39:06 +0000 (0:00:09.333) 0:01:47.778 ******
2025-09-19 11:40:06.050516 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:40:06.050526 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:40:06.050536 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:40:06.050545 | orchestrator |
2025-09-19 11:40:06.050555 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-19 11:40:06.050565 | orchestrator | Friday 19 September 2025 11:39:17 +0000 (0:00:11.076) 0:01:58.855 ******
2025-09-19 11:40:06.050574 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:40:06.050584 | orchestrator |
2025-09-19 11:40:06.050594 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-09-19 11:40:06.050603 | orchestrator | Friday 19 September 2025 11:39:18 +0000 (0:00:00.740) 0:01:59.596 ******
2025-09-19 11:40:06.050613 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:06.050623 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:40:06.050633 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:40:06.050642 | orchestrator |
2025-09-19 11:40:06.050652 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-09-19 11:40:06.050662 | orchestrator | Friday 19 September 2025 11:39:19 +0000 (0:00:00.779) 0:02:00.376 ******
2025-09-19 11:40:06.050672 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:40:06.050681 | orchestrator |
2025-09-19 11:40:06.050691 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-09-19 11:40:06.050701 | orchestrator | Friday 19 September 2025 11:39:21 +0000 (0:00:01.845) 0:02:02.221 ******
2025-09-19 11:40:06.050710 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-09-19 11:40:06.050720 | orchestrator |
2025-09-19 11:40:06.050730 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-09-19 11:40:06.050740 | orchestrator | Friday 19 September 2025 11:39:32 +0000 (0:00:11.081) 0:02:13.303 ******
2025-09-19 11:40:06.050749 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-09-19 11:40:06.050759 | orchestrator |
2025-09-19 11:40:06.050769 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-09-19 11:40:06.050782 | orchestrator | Friday 19 September 2025 11:39:54 +0000 (0:00:22.202) 0:02:35.505 ******
2025-09-19 11:40:06.050792 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-09-19 11:40:06.050802 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-09-19 11:40:06.050812 | orchestrator |
2025-09-19 11:40:06.050822 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-09-19 11:40:06.050831 | orchestrator | Friday 19 September 2025 11:40:00 +0000 (0:00:05.784) 0:02:41.289 ******
2025-09-19 11:40:06.050841 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:06.050851 | orchestrator |
2025-09-19 11:40:06.050860 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-09-19 11:40:06.050870 | orchestrator | Friday 19 September 2025 11:40:00 +0000 (0:00:00.187) 0:02:41.477 ******
2025-09-19 11:40:06.050880 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:06.050889 | orchestrator |
2025-09-19 11:40:06.050899 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-09-19 11:40:06.050909 | orchestrator | Friday 19 September 2025 11:40:00 +0000 (0:00:00.480) 0:02:41.958 ******
2025-09-19 11:40:06.050918 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:06.050928 | orchestrator |
2025-09-19 11:40:06.050938 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-09-19 11:40:06.050947 | orchestrator | Friday 19 September 2025 11:40:01 +0000 (0:00:00.280) 0:02:42.238 ******
2025-09-19 11:40:06.050957 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:06.050967 | orchestrator |
2025-09-19 11:40:06.050977 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-09-19 11:40:06.050994 | orchestrator | Friday 19 September 2025 11:40:01 +0000 (0:00:00.564) 0:02:42.803 ******
2025-09-19 11:40:06.051004 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:40:06.051013 | orchestrator |
2025-09-19 11:40:06.051023 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-19 11:40:06.051033 | orchestrator | Friday 19 September 2025 11:40:05 +0000 (0:00:03.426) 0:02:46.230 ******
2025-09-19 11:40:06.051042 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:40:06.051052 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:40:06.051062 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:40:06.051071 | orchestrator |
2025-09-19 11:40:06.051086 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:40:06.051097 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-09-19 11:40:06.051107 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-09-19 11:40:06.051117 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-09-19 11:40:06.051127 | orchestrator |
2025-09-19 11:40:06.051136 | orchestrator |
2025-09-19 11:40:06.051146 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:40:06.051156 | orchestrator | Friday 19 September 2025 11:40:05 +0000 (0:00:00.468) 0:02:46.699 ******
2025-09-19 11:40:06.051165 | orchestrator | ===============================================================================
2025-09-19 11:40:06.051175 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.20s
2025-09-19 11:40:06.051185 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 21.85s
2025-09-19 11:40:06.051194 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.71s
2025-09-19 11:40:06.051204 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.08s
2025-09-19 11:40:06.051214 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.08s
2025-09-19 11:40:06.051223 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.66s
2025-09-19 11:40:06.051233 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.33s
2025-09-19 11:40:06.051243 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.07s
2025-09-19 11:40:06.051252 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.78s
2025-09-19 11:40:06.051262 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.28s
2025-09-19 11:40:06.051272 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.71s
2025-09-19 11:40:06.051282 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.47s
2025-09-19 11:40:06.051291 | orchestrator | keystone : Creating default user role ----------------------------------- 3.43s
2025-09-19 11:40:06.051301 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.51s
2025-09-19 11:40:06.051311 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.44s
2025-09-19 11:40:06.051334 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.40s
2025-09-19 11:40:06.051344 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.22s
2025-09-19 11:40:06.051354 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.19s
2025-09-19 11:40:06.051363 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.99s
2025-09-19 11:40:06.051373 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.85s
2025-09-19 11:40:06.051383 | orchestrator | 2025-09-19 11:40:06 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:06.051397 | orchestrator | 2025-09-19 11:40:06 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:06.051412 | orchestrator | 2025-09-19 11:40:06 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED
2025-09-19 11:40:06.051422 | orchestrator | 2025-09-19 11:40:06 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:06.051432 | orchestrator | 2025-09-19 11:40:06 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:09.081114 | orchestrator | 2025-09-19 11:40:09 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:09.082075 | orchestrator | 2025-09-19 11:40:09 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:09.084136 | orchestrator | 2025-09-19 11:40:09 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:40:09.085038 | orchestrator | 2025-09-19 11:40:09 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED
2025-09-19 11:40:09.086525 | orchestrator | 2025-09-19 11:40:09 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:09.086575 | orchestrator | 2025-09-19 11:40:09 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:12.209954 | orchestrator | 2025-09-19 11:40:12 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:12.210054 | orchestrator | 2025-09-19 11:40:12 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:12.210060 | orchestrator | 2025-09-19 11:40:12 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:40:12.210065 | orchestrator | 2025-09-19 11:40:12 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED
2025-09-19 11:40:12.210069 | orchestrator | 2025-09-19 11:40:12 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:12.210074 | orchestrator | 2025-09-19 11:40:12 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:15.171650 | orchestrator | 2025-09-19 11:40:15 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:15.173103 | orchestrator | 2025-09-19 11:40:15 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:15.173377 | orchestrator | 2025-09-19 11:40:15 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:40:15.174264 | orchestrator | 2025-09-19 11:40:15 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state STARTED
2025-09-19 11:40:15.175287 | orchestrator | 2025-09-19 11:40:15 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:15.175332 | orchestrator | 2025-09-19 11:40:15 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:18.211560 | orchestrator | 2025-09-19 11:40:18 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:18.212539 | orchestrator | 2025-09-19 11:40:18 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:18.212579 | orchestrator | 2025-09-19 11:40:18 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:40:18.214840 | orchestrator | 2025-09-19 11:40:18 | INFO  | Task 27b9e70b-4bd7-47c7-9678-f2d69d059b7e is in state SUCCESS
2025-09-19 11:40:18.214877 | orchestrator | 2025-09-19 11:40:18 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:18.214887 | orchestrator | 2025-09-19 11:40:18 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:21.234906 | orchestrator | 2025-09-19 11:40:21 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:21.236372 | orchestrator | 2025-09-19 11:40:21 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:40:21.237050 | orchestrator | 2025-09-19 11:40:21 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:21.238068 | orchestrator | 2025-09-19 11:40:21 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:40:21.239327 | orchestrator | 2025-09-19 11:40:21 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:21.239351 | orchestrator | 2025-09-19 11:40:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:24.262968 | orchestrator | 2025-09-19 11:40:24 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:24.264694 | orchestrator | 2025-09-19 11:40:24 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:40:24.265468 | orchestrator | 2025-09-19 11:40:24 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:24.268295 | orchestrator | 2025-09-19 11:40:24 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:40:24.268954 | orchestrator | 2025-09-19 11:40:24 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:24.268984 | orchestrator | 2025-09-19 11:40:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:27.294104 | orchestrator | 2025-09-19 11:40:27 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:27.295842 | orchestrator | 2025-09-19 11:40:27 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:40:27.298349 | orchestrator | 2025-09-19 11:40:27 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:27.299864 | orchestrator | 2025-09-19 11:40:27 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:40:27.301780 | orchestrator | 2025-09-19 11:40:27 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:27.301802 | orchestrator | 2025-09-19 11:40:27 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:30.342540 | orchestrator | 2025-09-19 11:40:30 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:30.342646 | orchestrator | 2025-09-19 11:40:30 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:40:30.343708 | orchestrator | 2025-09-19 11:40:30 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:30.344445 | orchestrator | 2025-09-19 11:40:30 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:40:30.346641 | orchestrator | 2025-09-19 11:40:30 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:30.346698 | orchestrator | 2025-09-19 11:40:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:33.374379 | orchestrator | 2025-09-19 11:40:33 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:33.374448 | orchestrator | 2025-09-19 11:40:33 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:40:33.375225 | orchestrator | 2025-09-19 11:40:33 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:33.375456 | orchestrator | 2025-09-19 11:40:33 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:40:33.376476 | orchestrator | 2025-09-19 11:40:33 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:33.376530 | orchestrator | 2025-09-19 11:40:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:36.756780 | orchestrator | 2025-09-19 11:40:36 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:36.756861 | orchestrator | 2025-09-19 11:40:36 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:40:36.756871 | orchestrator | 2025-09-19 11:40:36 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:36.756878 | orchestrator | 2025-09-19 11:40:36 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:40:36.756885 | orchestrator | 2025-09-19 11:40:36 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:36.756893 | orchestrator | 2025-09-19 11:40:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:39.439982 | orchestrator | 2025-09-19 11:40:39 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:39.441260 | orchestrator | 2025-09-19 11:40:39 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:40:39.441611 | orchestrator | 2025-09-19 11:40:39 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:39.442284 | orchestrator | 2025-09-19 11:40:39 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:40:39.442920 | orchestrator | 2025-09-19 11:40:39 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:39.442955 | orchestrator | 2025-09-19 11:40:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:42.466738 | orchestrator | 2025-09-19 11:40:42 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:42.466939 | orchestrator | 2025-09-19 11:40:42 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:40:42.467471 | orchestrator | 2025-09-19 11:40:42 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:42.468141 | orchestrator | 2025-09-19 11:40:42 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:40:42.468917 | orchestrator | 2025-09-19 11:40:42 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:42.468939 | orchestrator | 2025-09-19 11:40:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:45.501186 | orchestrator | 2025-09-19 11:40:45 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:45.502178 | orchestrator | 2025-09-19 11:40:45 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:40:45.502465 | orchestrator | 2025-09-19 11:40:45 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED
2025-09-19 11:40:45.503402 | orchestrator | 2025-09-19 11:40:45 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:40:45.503895 | orchestrator | 2025-09-19 11:40:45 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:40:45.503919 | orchestrator | 2025-09-19 11:40:45 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:40:48.525442 | orchestrator | 2025-09-19 11:40:48 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED
2025-09-19 11:40:48.525771 | orchestrator | 2025-09-19 11:40:48 | INFO  | Task
9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:40:48.526425 | orchestrator | 2025-09-19 11:40:48 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:40:48.527702 | orchestrator | 2025-09-19 11:40:48 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED 2025-09-19 11:40:48.528387 | orchestrator | 2025-09-19 11:40:48 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:40:48.528424 | orchestrator | 2025-09-19 11:40:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:51.560957 | orchestrator | 2025-09-19 11:40:51 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:40:51.561601 | orchestrator | 2025-09-19 11:40:51 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:40:51.562370 | orchestrator | 2025-09-19 11:40:51 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:40:51.563898 | orchestrator | 2025-09-19 11:40:51 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED 2025-09-19 11:40:51.564731 | orchestrator | 2025-09-19 11:40:51 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:40:51.564752 | orchestrator | 2025-09-19 11:40:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:54.593871 | orchestrator | 2025-09-19 11:40:54 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:40:54.594767 | orchestrator | 2025-09-19 11:40:54 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:40:54.595849 | orchestrator | 2025-09-19 11:40:54 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:40:54.596618 | orchestrator | 2025-09-19 11:40:54 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED 2025-09-19 11:40:54.597727 | orchestrator | 2025-09-19 11:40:54 | INFO  | Task 
1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:40:54.597753 | orchestrator | 2025-09-19 11:40:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:40:57.674968 | orchestrator | 2025-09-19 11:40:57 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:40:57.675058 | orchestrator | 2025-09-19 11:40:57 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:40:57.675071 | orchestrator | 2025-09-19 11:40:57 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:40:57.675081 | orchestrator | 2025-09-19 11:40:57 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED 2025-09-19 11:40:57.675091 | orchestrator | 2025-09-19 11:40:57 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:40:57.675102 | orchestrator | 2025-09-19 11:40:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:00.698252 | orchestrator | 2025-09-19 11:41:00 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state STARTED 2025-09-19 11:41:00.698840 | orchestrator | 2025-09-19 11:41:00 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:41:00.701922 | orchestrator | 2025-09-19 11:41:00 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:41:00.702808 | orchestrator | 2025-09-19 11:41:00 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED 2025-09-19 11:41:00.704836 | orchestrator | 2025-09-19 11:41:00 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:41:00.704871 | orchestrator | 2025-09-19 11:41:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:41:03.726509 | orchestrator | 2025-09-19 11:41:03 | INFO  | Task 9c396114-dfac-49dd-abb8-5d10056aa140 is in state SUCCESS 2025-09-19 11:41:03.726590 | orchestrator | 2025-09-19 11:41:03.726605 | orchestrator | 2025-09-19 
11:41:03.726641 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:41:03.726653 | orchestrator | 2025-09-19 11:41:03.726704 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:41:03.726716 | orchestrator | Friday 19 September 2025 11:39:35 +0000 (0:00:00.193) 0:00:00.193 ****** 2025-09-19 11:41:03.726801 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:41:03.726818 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:41:03.726829 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:41:03.726839 | orchestrator | ok: [testbed-manager] 2025-09-19 11:41:03.726850 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:41:03.726860 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:41:03.726871 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:41:03.726882 | orchestrator | 2025-09-19 11:41:03.726893 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:41:03.726904 | orchestrator | Friday 19 September 2025 11:39:36 +0000 (0:00:00.850) 0:00:01.043 ****** 2025-09-19 11:41:03.726914 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-19 11:41:03.726925 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-19 11:41:03.726936 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-19 11:41:03.726946 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-19 11:41:03.726957 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-19 11:41:03.726967 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-19 11:41:03.726978 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-19 11:41:03.726988 | orchestrator | 2025-09-19 11:41:03.726999 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-19 
11:41:03.727009 | orchestrator | 2025-09-19 11:41:03.727020 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-19 11:41:03.727031 | orchestrator | Friday 19 September 2025 11:39:37 +0000 (0:00:00.716) 0:00:01.760 ****** 2025-09-19 11:41:03.727042 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:41:03.727054 | orchestrator | 2025-09-19 11:41:03.727065 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-19 11:41:03.727076 | orchestrator | Friday 19 September 2025 11:39:39 +0000 (0:00:02.232) 0:00:03.993 ****** 2025-09-19 11:41:03.727086 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-19 11:41:03.727097 | orchestrator | 2025-09-19 11:41:03.727107 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-19 11:41:03.727118 | orchestrator | Friday 19 September 2025 11:39:51 +0000 (0:00:11.467) 0:00:15.461 ****** 2025-09-19 11:41:03.727129 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-19 11:41:03.727141 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-19 11:41:03.727151 | orchestrator | 2025-09-19 11:41:03.727162 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-19 11:41:03.727172 | orchestrator | Friday 19 September 2025 11:39:57 +0000 (0:00:06.579) 0:00:22.040 ****** 2025-09-19 11:41:03.727264 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 11:41:03.727301 | orchestrator | 2025-09-19 11:41:03.727314 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating 
users] ************************* 2025-09-19 11:41:03.727325 | orchestrator | Friday 19 September 2025 11:40:00 +0000 (0:00:03.179) 0:00:25.219 ****** 2025-09-19 11:41:03.727336 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 11:41:03.727347 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-19 11:41:03.727357 | orchestrator | 2025-09-19 11:41:03.727368 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-19 11:41:03.727389 | orchestrator | Friday 19 September 2025 11:40:04 +0000 (0:00:03.850) 0:00:29.070 ****** 2025-09-19 11:41:03.727400 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 11:41:03.727411 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-19 11:41:03.727422 | orchestrator | 2025-09-19 11:41:03.727432 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-19 11:41:03.727443 | orchestrator | Friday 19 September 2025 11:40:11 +0000 (0:00:06.615) 0:00:35.686 ****** 2025-09-19 11:41:03.727454 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-19 11:41:03.727464 | orchestrator | 2025-09-19 11:41:03.727475 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:41:03.727491 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:41:03.727502 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:41:03.727513 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:41:03.727524 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:41:03.727535 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2025-09-19 11:41:03.727560 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:41:03.727571 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:41:03.727582 | orchestrator | 2025-09-19 11:41:03.727593 | orchestrator | 2025-09-19 11:41:03.727604 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:41:03.727614 | orchestrator | Friday 19 September 2025 11:40:16 +0000 (0:00:05.600) 0:00:41.287 ****** 2025-09-19 11:41:03.727625 | orchestrator | =============================================================================== 2025-09-19 11:41:03.727635 | orchestrator | service-ks-register : ceph-rgw | Creating services --------------------- 11.47s 2025-09-19 11:41:03.727646 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.62s 2025-09-19 11:41:03.727657 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.58s 2025-09-19 11:41:03.727667 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.60s 2025-09-19 11:41:03.727678 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.85s 2025-09-19 11:41:03.727688 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.18s 2025-09-19 11:41:03.727699 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.23s 2025-09-19 11:41:03.727709 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.85s 2025-09-19 11:41:03.727719 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s 2025-09-19 11:41:03.727730 | orchestrator | 2025-09-19 11:41:03.727741 | orchestrator | 2025-09-19 11:41:03.727752 | orchestrator | PLAY 
[Bootstraph ceph dashboard] *********************************************** 2025-09-19 11:41:03.727762 | orchestrator | 2025-09-19 11:41:03.727773 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-19 11:41:03.727783 | orchestrator | Friday 19 September 2025 11:39:29 +0000 (0:00:00.282) 0:00:00.282 ****** 2025-09-19 11:41:03.727794 | orchestrator | changed: [testbed-manager] 2025-09-19 11:41:03.727804 | orchestrator | 2025-09-19 11:41:03.727815 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-19 11:41:03.727826 | orchestrator | Friday 19 September 2025 11:39:30 +0000 (0:00:01.788) 0:00:02.070 ****** 2025-09-19 11:41:03.727842 | orchestrator | changed: [testbed-manager] 2025-09-19 11:41:03.727853 | orchestrator | 2025-09-19 11:41:03.727864 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-19 11:41:03.727874 | orchestrator | Friday 19 September 2025 11:39:31 +0000 (0:00:01.031) 0:00:03.103 ****** 2025-09-19 11:41:03.727885 | orchestrator | changed: [testbed-manager] 2025-09-19 11:41:03.727896 | orchestrator | 2025-09-19 11:41:03.727979 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-19 11:41:03.727992 | orchestrator | Friday 19 September 2025 11:39:33 +0000 (0:00:01.189) 0:00:04.292 ****** 2025-09-19 11:41:03.728005 | orchestrator | changed: [testbed-manager] 2025-09-19 11:41:03.728017 | orchestrator | 2025-09-19 11:41:03.728029 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-19 11:41:03.728042 | orchestrator | Friday 19 September 2025 11:39:34 +0000 (0:00:01.609) 0:00:05.901 ****** 2025-09-19 11:41:03.728054 | orchestrator | changed: [testbed-manager] 2025-09-19 11:41:03.728067 | orchestrator | 2025-09-19 11:41:03.728078 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code 
to 404] ********************** 2025-09-19 11:41:03.728088 | orchestrator | Friday 19 September 2025 11:39:35 +0000 (0:00:01.024) 0:00:06.926 ****** 2025-09-19 11:41:03.728099 | orchestrator | changed: [testbed-manager] 2025-09-19 11:41:03.728110 | orchestrator | 2025-09-19 11:41:03.728120 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-19 11:41:03.728131 | orchestrator | Friday 19 September 2025 11:39:36 +0000 (0:00:00.959) 0:00:07.886 ****** 2025-09-19 11:41:03.728142 | orchestrator | changed: [testbed-manager] 2025-09-19 11:41:03.728152 | orchestrator | 2025-09-19 11:41:03.728163 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-19 11:41:03.728173 | orchestrator | Friday 19 September 2025 11:39:37 +0000 (0:00:01.091) 0:00:08.977 ****** 2025-09-19 11:41:03.728184 | orchestrator | changed: [testbed-manager] 2025-09-19 11:41:03.728195 | orchestrator | 2025-09-19 11:41:03.728206 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-19 11:41:03.728216 | orchestrator | Friday 19 September 2025 11:39:38 +0000 (0:00:00.946) 0:00:09.924 ****** 2025-09-19 11:41:03.728227 | orchestrator | changed: [testbed-manager] 2025-09-19 11:41:03.728237 | orchestrator | 2025-09-19 11:41:03.728248 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-19 11:41:03.728258 | orchestrator | Friday 19 September 2025 11:40:36 +0000 (0:00:58.209) 0:01:08.134 ****** 2025-09-19 11:41:03.728269 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:41:03.728308 | orchestrator | 2025-09-19 11:41:03.728320 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 11:41:03.728330 | orchestrator | 2025-09-19 11:41:03.728347 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 
11:41:03.728358 | orchestrator | Friday 19 September 2025 11:40:37 +0000 (0:00:00.170) 0:01:08.305 ****** 2025-09-19 11:41:03.728368 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:41:03.728379 | orchestrator | 2025-09-19 11:41:03.728390 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 11:41:03.728400 | orchestrator | 2025-09-19 11:41:03.728411 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 11:41:03.728421 | orchestrator | Friday 19 September 2025 11:40:38 +0000 (0:00:01.621) 0:01:09.926 ****** 2025-09-19 11:41:03.728432 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:41:03.728443 | orchestrator | 2025-09-19 11:41:03.728453 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 11:41:03.728464 | orchestrator | 2025-09-19 11:41:03.728475 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 11:41:03.728499 | orchestrator | Friday 19 September 2025 11:40:49 +0000 (0:00:11.276) 0:01:21.203 ****** 2025-09-19 11:41:03.728518 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:41:03.728534 | orchestrator | 2025-09-19 11:41:03.728545 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:41:03.728565 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 11:41:03.728576 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:41:03.728586 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:41:03.728597 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:41:03.728608 | orchestrator | 2025-09-19 11:41:03.728619 | 
orchestrator | 2025-09-19 11:41:03.728630 | orchestrator | 2025-09-19 11:41:03.728640 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:41:03.728742 | orchestrator | Friday 19 September 2025 11:41:01 +0000 (0:00:11.159) 0:01:32.362 ****** 2025-09-19 11:41:03.728754 | orchestrator | =============================================================================== 2025-09-19 11:41:03.728765 | orchestrator | Create admin user ------------------------------------------------------ 58.21s 2025-09-19 11:41:03.728776 | orchestrator | Restart ceph manager service ------------------------------------------- 24.06s 2025-09-19 11:41:03.728838 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.79s 2025-09-19 11:41:03.728850 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.61s 2025-09-19 11:41:03.728861 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.19s 2025-09-19 11:41:03.728872 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.09s 2025-09-19 11:41:03.728882 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.03s 2025-09-19 11:41:03.728893 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.02s 2025-09-19 11:41:03.728903 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.96s 2025-09-19 11:41:03.728914 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.95s 2025-09-19 11:41:03.728925 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.17s 2025-09-19 11:41:03.728936 | orchestrator | 2025-09-19 11:41:03 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:41:03.728947 | orchestrator | 2025-09-19 11:41:03 | INFO  | 
Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:41:03.728958 | orchestrator | 2025-09-19 11:41:03 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED 2025-09-19 11:41:03.728975 | orchestrator | 2025-09-19 11:41:03 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:41:03.728986 | orchestrator | 2025-09-19 11:41:03 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for tasks 9307508d, 83f2667d, 4023b704, and 1804ba28 repeated roughly every 3 seconds from 11:41:06 to 11:42:04 ...]
2025-09-19 11:42:07.566571 | orchestrator | 2025-09-19 11:42:07 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:42:07.566834 | orchestrator | 2025-09-19 11:42:07 | INFO  | Task 
83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:42:07.567417 | orchestrator | 2025-09-19 11:42:07 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED 2025-09-19 11:42:07.568142 | orchestrator | 2025-09-19 11:42:07 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:42:07.568163 | orchestrator | 2025-09-19 11:42:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:10.607976 | orchestrator | 2025-09-19 11:42:10 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:42:10.615827 | orchestrator | 2025-09-19 11:42:10 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:42:10.617335 | orchestrator | 2025-09-19 11:42:10 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED 2025-09-19 11:42:10.618773 | orchestrator | 2025-09-19 11:42:10 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:42:10.618800 | orchestrator | 2025-09-19 11:42:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:13.643332 | orchestrator | 2025-09-19 11:42:13 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:42:13.643481 | orchestrator | 2025-09-19 11:42:13 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:42:13.644023 | orchestrator | 2025-09-19 11:42:13 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED 2025-09-19 11:42:13.644740 | orchestrator | 2025-09-19 11:42:13 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:42:13.644763 | orchestrator | 2025-09-19 11:42:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:16.672479 | orchestrator | 2025-09-19 11:42:16 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:42:16.673662 | orchestrator | 2025-09-19 11:42:16 | INFO  | Task 
83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:42:16.675173 | orchestrator | 2025-09-19 11:42:16 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED 2025-09-19 11:42:16.676855 | orchestrator | 2025-09-19 11:42:16 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:42:16.676877 | orchestrator | 2025-09-19 11:42:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:19.714379 | orchestrator | 2025-09-19 11:42:19 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:42:19.715455 | orchestrator | 2025-09-19 11:42:19 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:42:19.716638 | orchestrator | 2025-09-19 11:42:19 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED 2025-09-19 11:42:19.717527 | orchestrator | 2025-09-19 11:42:19 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:42:19.717556 | orchestrator | 2025-09-19 11:42:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:22.761522 | orchestrator | 2025-09-19 11:42:22 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:42:22.763721 | orchestrator | 2025-09-19 11:42:22 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state STARTED 2025-09-19 11:42:22.765922 | orchestrator | 2025-09-19 11:42:22 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED 2025-09-19 11:42:22.768658 | orchestrator | 2025-09-19 11:42:22 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED 2025-09-19 11:42:22.768759 | orchestrator | 2025-09-19 11:42:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:42:25.811216 | orchestrator | 2025-09-19 11:42:25 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:42:25.811522 | orchestrator | 2025-09-19 11:42:25 | INFO  | Task 
9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:42:25.812448 | orchestrator | 2025-09-19 11:42:25 | INFO  | Task 83f2667d-ddab-4b3e-92c2-0f1a0bc503cb is in state SUCCESS
2025-09-19 11:42:25.814170 | orchestrator |
2025-09-19 11:42:25.814207 | orchestrator |
2025-09-19 11:42:25.814220 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:42:25.814232 | orchestrator |
2025-09-19 11:42:25.814243 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:42:25.814278 | orchestrator | Friday 19 September 2025 11:39:28 +0000 (0:00:00.294) 0:00:00.294 ******
2025-09-19 11:42:25.814291 | orchestrator | ok: [testbed-manager]
2025-09-19 11:42:25.814303 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:42:25.814314 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:42:25.814409 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:42:25.814423 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:42:25.814434 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:42:25.814444 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:42:25.814456 | orchestrator |
2025-09-19 11:42:25.814467 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:42:25.814478 | orchestrator | Friday 19 September 2025 11:39:29 +0000 (0:00:00.867) 0:00:01.161 ******
2025-09-19 11:42:25.814490 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-09-19 11:42:25.814501 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-09-19 11:42:25.814512 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-09-19 11:42:25.814523 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-09-19 11:42:25.814534 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-09-19 11:42:25.814545 | orchestrator | ok: [testbed-node-4] => 
(item=enable_prometheus_True)
2025-09-19 11:42:25.814556 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-09-19 11:42:25.814567 | orchestrator |
2025-09-19 11:42:25.814578 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-09-19 11:42:25.814589 | orchestrator |
2025-09-19 11:42:25.814600 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-09-19 11:42:25.814610 | orchestrator | Friday 19 September 2025 11:39:30 +0000 (0:00:00.811) 0:00:01.973 ******
2025-09-19 11:42:25.814623 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:42:25.814660 | orchestrator |
2025-09-19 11:42:25.814672 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-09-19 11:42:25.814683 | orchestrator | Friday 19 September 2025 11:39:32 +0000 (0:00:01.634) 0:00:03.608 ******
2025-09-19 11:42:25.814712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.814776 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 11:42:25.814793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.814806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.814834 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.814848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.814886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.814912 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.814931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.814944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.815063 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.815087 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 11:42:25.815102 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.815121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.815138 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.815150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.815161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.815173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.815191 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.815204 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.815216 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.815236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.815308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.815323 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.815426 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.815439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.815459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.815471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.815492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.815503 | orchestrator |
2025-09-19 11:42:25.815514 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-09-19 11:42:25.815526 | orchestrator | Friday 19 September 2025 11:39:36 +0000 (0:00:03.952) 0:00:07.561 ******
2025-09-19 11:42:25.815537 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:42:25.815548 | orchestrator |
2025-09-19 11:42:25.815559 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-09-19 11:42:25.815570 | orchestrator | Friday 19 September 2025 11:39:37 +0000 (0:00:01.309) 0:00:08.870 ******
2025-09-19 11:42:25.815596 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 11:42:25.815617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.815637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.815665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.815696 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.815715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.815733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.815760 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.815780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.815798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.815818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.815839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.815859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.815871 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.815887 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.815900 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.816075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.816104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.816123 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.816178 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.816200 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.816221 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 11:42:25.816298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.816319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.816330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.817535 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.817567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.817579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.817590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.817601 | orchestrator | 2025-09-19 11:42:25.817613 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-19 11:42:25.817625 | orchestrator | Friday 19 September 2025 11:39:43 +0000 (0:00:06.098) 0:00:14.968 ****** 2025-09-19 11:42:25.817646 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 11:42:25.817659 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.817671 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.817703 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 11:42:25.817716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.817728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.817744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.817756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 
11:42:25.817767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.817786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.817798 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:42:25.817817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.817829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.817840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.817851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.817867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.817879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.817898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.817909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.817926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.817938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.817949 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:42:25.817960 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:42:25.817971 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:42:25.817982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.817999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818078 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:25.818091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.818102 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818137 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:42:25.818150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.818162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
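The `changed:`/`skipping:` items above are one loop iteration per entry of a services mapping, keyed by service name with per-service config (`container_name`, `enabled`, `image`, `volumes`, ...). A minimal sketch of that iteration pattern — an illustration of the dict structure visible in the log, not the actual kolla-ansible implementation; the disabled second service is hypothetical:

```python
# Sketch (assumed structure, mirroring the log's item dicts) of iterating a
# kolla-style services mapping the way Ansible's with_dict loop does: one
# (key, value) item per service, with disabled services filtered out.
services = {
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711",
        "volumes": ["/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro"],
    },
    "prometheus-mysqld-exporter": {
        "container_name": "prometheus_mysqld_exporter",
        "enabled": False,  # hypothetical disabled service, for illustration only
        "image": "registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711",
        "volumes": [],
    },
}

def enabled_services(svcs):
    """Yield (name, config) pairs for services with enabled=True —
    the filter that decides which containers the task acts on."""
    for key, value in svcs.items():
        if value.get("enabled"):
            yield key, value

for name, cfg in enabled_services(services):
    print(f"{name} -> {cfg['container_name']}")
```

Per-host results then depend on group membership: the same loop runs everywhere, which is why, for example, `prometheus-libvirt-exporter` items appear only on the compute nodes (testbed-node-3/4/5) while the manager handles `prometheus-alertmanager` and `prometheus-blackbox-exporter`.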
2025-09-19 11:42:25.818175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818187 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:42:25.818199 | orchestrator | 2025-09-19 11:42:25.818217 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-19 11:42:25.818230 | orchestrator | Friday 19 September 2025 11:39:45 +0000 (0:00:01.652) 0:00:16.621 ****** 2025-09-19 11:42:25.818242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.818285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 
11:42:25.818298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.818317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.818345 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 11:42:25.818362 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.818375 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818395 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 11:42:25.818425 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.818439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.818451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.818463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.818485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.818515 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:42:25.818526 | orchestrator | skipping: [testbed-manager] 2025-09-19 11:42:25.818537 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:42:25.818548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.818560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818590 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:42:25.818602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.818613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.818625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.818649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 11:42:25.818672 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:42:25.818683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.818700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818724 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:42:25.818735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 11:42:25.818746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 11:42:25.818782 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:42:25.818793 | orchestrator | 2025-09-19 11:42:25.818804 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-19 11:42:25.818815 | orchestrator | Friday 19 September 2025 11:39:47 +0000 (0:00:02.102) 0:00:18.724 ****** 2025-09-19 11:42:25.818826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:42:25.818838 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 11:42:25.818854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:42:25.818866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:42:25.818878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:42:25.818898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:42:25.818914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.818926 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:42:25.818937 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 11:42:25.818948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.818965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.818977 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.818988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.819006 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.819022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.819034 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.819045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.819057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.819074 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.819086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.819108 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.819120 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.819138 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 11:42:25.819150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.819167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.819179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.819197 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.819208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.819224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.819235 | orchestrator |
2025-09-19 11:42:25.819247 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-09-19 11:42:25.819290 | orchestrator | Friday 19 September 2025  11:39:53 +0000 (0:00:05.721)       0:00:24.446 ******
2025-09-19 11:42:25.819301 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:42:25.819312 | orchestrator |
2025-09-19 11:42:25.819323 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-09-19 11:42:25.819334 | orchestrator | Friday 19 September 2025  11:39:54 +0000 (0:00:01.070)       0:00:25.516 ******
2025-09-19 11:42:25.819345 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090225, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.154551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.819357 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-09-19 11:42:25.819375 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-09-19 11:42:25.819394 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-09-19 11:42:25.819406 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090239, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1640666, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.819417 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-09-19 11:42:25.819437 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rules)
2025-09-19 11:42:25.819448 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-09-19 11:42:25.819460 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rules)
2025-09-19 11:42:25.819476 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1090222, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.153551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.819495 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rules)
2025-09-19 11:42:25.819506 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-09-19 11:42:25.819517 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
2025-09-19 11:42:25.819534 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rules)
2025-09-19 11:42:25.819546 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090234, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1595511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.819557 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rules)
2025-09-19 11:42:25.819574 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rules)
2025-09-19 11:42:25.819592 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2025-09-19 11:42:25.819604 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rules)
2025-09-19 11:42:25.819615 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2025-09-19 11:42:25.819631 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090218, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.150551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.819643 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules)
2025-09-19 11:42:25.819654 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
2025-09-19 11:42:25.819680 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules)
2025-09-19 11:42:25.819692 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2025-09-19 11:42:25.819703 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules)
2025-09-19 11:42:25.819714 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2025-09-19 11:42:25.819730 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules)
2025-09-19 11:42:25.819741 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules)
2025-09-19 11:42:25.819753 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090226, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1562428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.819778 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules)
2025-09-19 11:42:25.819790 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2025-09-19 11:42:25.819801 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2025-09-19 11:42:25.819813 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2025-09-19 11:42:25.819829 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
2025-09-19 11:42:25.819840 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules)
2025-09-19 11:42:25.819852 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090232, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1595511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.819875 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2025-09-19 11:42:25.819886 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2025-09-19 11:42:25.819898 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2025-09-19 11:42:25.819909 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2025-09-19 11:42:25.819925 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2025-09-19 11:42:25.819936 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090228, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1571982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.819947 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2025-09-19 11:42:25.819970 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2025-09-19 11:42:25.819982 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2025-09-19 11:42:25.819994 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090224, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.154551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.820005 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2025-09-19 11:42:25.820021 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2025-09-19 11:42:25.820032 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2025-09-19 11:42:25.820050 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-19 11:42:25.820067 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-19 11:42:25.820079 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090238, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1625512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.820096 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-19 11:42:25.820116 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2025-09-19 11:42:25.820142 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2025-09-19 11:42:25.820164 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-19 11:42:25.820200 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-19 11:42:25.820216 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090216, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1494675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.820235 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-19 11:42:25.820247 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-19 11:42:25.820317 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-19 11:42:25.820335 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-19 11:42:25.820346 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090224, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.154551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820368 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090216, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1494675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820380 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090216, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1494675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820491 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090246, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 
'mtime': 1752315970.0, 'ctime': 1758279452.1670575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820506 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090216, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1494675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820517 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090246, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1670575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820532 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090246, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1670575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820551 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090246, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1670575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820561 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090236, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1618884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820571 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090236, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1618884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820609 | orchestrator | 
changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090234, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1595511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:42:25.820620 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090238, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1625512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820630 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090246, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1670575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820645 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090236, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1618884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820661 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090236, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1618884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820671 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090220, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.151551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820681 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090220, 'dev': 124, 'nlink': 1, 'atime': 
1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.151551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820696 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090216, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1494675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820706 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090236, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1618884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820716 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090220, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.151551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820730 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090220, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.151551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820748 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090217, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1501033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820758 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090217, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1501033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820768 | 
orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090220, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.151551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820783 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090218, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.150551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:42:25.820793 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090231, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1585512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820804 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090217, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1501033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820818 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090246, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1670575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820835 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090217, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1501033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820845 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090217, 'dev': 124, 'nlink': 1, 
'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1501033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820855 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090231, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1585512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820870 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090229, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1575432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820880 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090244, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.165597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820890 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090231, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1585512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820908 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:42:25.820923 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090236, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1618884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820933 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090231, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1585512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820943 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090231, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1585512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820953 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090229, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1575432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820968 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090220, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.151551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.820978 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090226, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1562428, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 11:42:25.820988 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090229, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1575432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.821008 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090229, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1575432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 11:42:25.821018 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 
'inode': 1090217, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1501033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821028 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090244, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.165597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821038 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:42:25.821048 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090229, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1575432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821062 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090244, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.165597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821073 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090231, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1585512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821093 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:25.821103 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090244, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.165597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821112 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:25.821126 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090244, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.165597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821136 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:42:25.821146 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090229, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1575432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821156 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090232, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1595511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821166 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090244, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.165597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821176 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:25.821191 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1090228, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1571982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821201 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090224, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.154551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821218 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090238, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1625512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821232 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090216, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1494675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821242 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090246, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1670575, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821271 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090236, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1618884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821282 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090220, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.151551, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821297 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090217, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1501033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821307 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090231, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1585512, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821323 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090229, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1575432, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821338 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090244, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.165597, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 11:42:25.821348 | orchestrator |
2025-09-19 11:42:25.821358 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-09-19 11:42:25.821368 | orchestrator | Friday 19 September 2025 11:40:18 +0000 (0:00:24.521) 0:00:50.037 ******
2025-09-19 11:42:25.821377 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:42:25.821387 | orchestrator |
2025-09-19 11:42:25.821396 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-09-19 11:42:25.821406 | orchestrator | Friday 19 September 2025 11:40:19 +0000 (0:00:00.682) 0:00:50.720 ******
2025-09-19 11:42:25.821416 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-09-19 11:42:25.821479 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:42:25.821489 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-09-19 11:42:25.821536 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-09-19 11:42:25.821583 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-09-19 11:42:25.821638 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-09-19 11:42:25.821690 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-09-19 11:42:25.821738 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-09-19 11:42:25.821785 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 11:42:25.821794 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-19 11:42:25.821804 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-19 11:42:25.821813 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-19 11:42:25.821822 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-19 11:42:25.821832 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-19 11:42:25.821841 | orchestrator |
2025-09-19 11:42:25.821850 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-09-19 11:42:25.821860 | orchestrator | Friday 19 September 2025 11:40:21 +0000 (0:00:01.953) 0:00:52.674 ******
2025-09-19 11:42:25.821869 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:42:25.821879 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:42:25.821888 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:42:25.821898 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:42:25.821907 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:42:25.821916 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:42:25.821926 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:42:25.821936 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:25.821945 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:42:25.821955 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:25.821969 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:42:25.821978 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:25.821988 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-19 11:42:25.821997 | orchestrator |
2025-09-19 11:42:25.822007 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-09-19 11:42:25.822043 | orchestrator | Friday 19 September 2025 11:40:36 +0000 (0:00:15.594) 0:01:08.268 ******
2025-09-19 11:42:25.822054 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:42:25.822064 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:42:25.822081 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:42:25.822090 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:42:25.822100 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:42:25.822109 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:25.822118 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:42:25.822128 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:42:25.822137 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:42:25.822146 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:25.822156 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:42:25.822165 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:25.822174 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-19 11:42:25.822184 | orchestrator |
2025-09-19 11:42:25.822193 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-09-19 11:42:25.822203 | orchestrator | Friday 19 September 2025 11:40:40 +0000 (0:00:03.754) 0:01:12.023 ******
2025-09-19 11:42:25.822212 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 11:42:25.822222 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 11:42:25.822232 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:42:25.822241 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 11:42:25.822296 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:42:25.822309 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:42:25.822324 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 11:42:25.822334 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:25.822344 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 11:42:25.822354 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 11:42:25.822363 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:25.822372 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-19 11:42:25.822382 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:25.822391 | orchestrator |
2025-09-19 11:42:25.822401 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-09-19 11:42:25.822410 | orchestrator | Friday 19 September 2025 11:40:41 +0000 (0:00:01.310) 0:01:13.333 ******
2025-09-19 11:42:25.822420 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:42:25.822429 | orchestrator |
2025-09-19 11:42:25.822439 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-09-19 11:42:25.822448 | orchestrator | Friday 19 September 2025 11:40:42 +0000 (0:00:00.869) 0:01:14.203 ******
2025-09-19 11:42:25.822458 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:42:25.822467 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:42:25.822477 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:42:25.822486 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:42:25.822495 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:25.822505 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:25.822514 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:25.822524 | orchestrator |
2025-09-19 11:42:25.822533 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-09-19 11:42:25.822550 | orchestrator | Friday 19 September 2025 11:40:43 +0000 (0:00:01.088) 0:01:15.292 ******
2025-09-19 11:42:25.822560 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:42:25.822569 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:25.822578 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:25.822588 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:42:25.822597 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:25.822606 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:42:25.822616 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:42:25.822625 | orchestrator |
2025-09-19 11:42:25.822634 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-09-19 11:42:25.822644 | orchestrator | Friday 19 September 2025 11:40:46 +0000 (0:00:02.839) 0:01:18.132 ******
2025-09-19 11:42:25.822653 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 11:42:25.822663 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:42:25.822677 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 11:42:25.822687 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:42:25.822696 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 11:42:25.822705 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 11:42:25.822715 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 11:42:25.822724 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:42:25.822733 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:42:25.822743 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:25.822752 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 11:42:25.822762 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:25.822771 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-19 11:42:25.822778 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:25.822786 | orchestrator |
2025-09-19 11:42:25.822794 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-09-19 11:42:25.822801 | orchestrator | Friday 19 September 2025 11:40:48 +0000 (0:00:02.104) 0:01:20.236 ******
2025-09-19 11:42:25.822809 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 11:42:25.822817 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:42:25.822825 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 11:42:25.822833 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:42:25.822840 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 11:42:25.822848 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:42:25.822856 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 11:42:25.822863 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:25.822871 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 11:42:25.822879 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:25.822887 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 11:42:25.822895 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:25.822903 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-19 11:42:25.822910 | orchestrator |
2025-09-19 11:42:25.822922 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-09-19 11:42:25.822936 | orchestrator | Friday 19 September 2025 11:40:51 +0000 (0:00:02.304) 0:01:22.541 ******
2025-09-19 11:42:25.822943 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2025-09-19 11:42:25.822982 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 11:42:25.822990 | orchestrator |
2025-09-19 11:42:25.822998 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-09-19 11:42:25.823006 | orchestrator | Friday 19 September 2025 11:40:52 +0000 (0:00:01.117) 0:01:23.659 ******
2025-09-19 11:42:25.823013 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:42:25.823021 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:42:25.823028 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:42:25.823036 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:42:25.823044 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:25.823051 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:25.823059 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:25.823067 | orchestrator |
2025-09-19 11:42:25.823075 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-09-19 11:42:25.823082 | orchestrator | Friday 19 September 2025 11:40:53 +0000 (0:00:00.841) 0:01:24.501 ******
2025-09-19 11:42:25.823090 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:42:25.823098 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:42:25.823105 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:42:25.823113 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:42:25.823121 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:42:25.823128 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:42:25.823136 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:42:25.823144 | orchestrator |
2025-09-19 11:42:25.823151 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-09-19 11:42:25.823159 | orchestrator | Friday 19 September 2025 11:40:54 +0000 (0:00:01.229) 0:01:25.730 ******
2025-09-19 11:42:25.823171 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 11:42:25.823180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.823189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.823202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.823216 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.823225 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.823233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.823241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.823270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.823280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.823288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.823307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.823321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 11:42:25.823329 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.823338 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.823346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.823358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.823366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 11:42:25.823380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro',
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.823394 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 11:42:25.823404 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.823413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.823425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.823433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 11:42:25.823446 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2025-09-19 11:42:25.823454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.823467 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.823475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 11:42:25.823483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 11:42:25.823491 | orchestrator |
2025-09-19 11:42:25.823499 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-09-19 11:42:25.823507 | orchestrator | Friday 19 September 2025 11:40:58 +0000 (0:00:04.270) 0:01:30.000 ******
2025-09-19 11:42:25.823515 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-19 11:42:25.823523 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:42:25.823531 | orchestrator |
2025-09-19 11:42:25.823539 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 11:42:25.823546 | orchestrator | Friday 19 September 2025 11:41:00 +0000 (0:00:01.433) 0:01:31.433 ******
2025-09-19 11:42:25.823554 | orchestrator |
2025-09-19 11:42:25.823562 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 11:42:25.823570 | orchestrator | Friday 19 September 2025 11:41:00 +0000 (0:00:00.104) 0:01:31.538 ******
2025-09-19 11:42:25.823577 | orchestrator |
2025-09-19 11:42:25.823585 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 11:42:25.823593 | orchestrator | Friday 19 September 2025 11:41:00 +0000 (0:00:00.072) 0:01:31.611 ******
2025-09-19 11:42:25.823605 | orchestrator |
2025-09-19 11:42:25.823617 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 11:42:25.823625 | orchestrator | Friday 19 September 2025 11:41:00 +0000 (0:00:00.332) 0:01:31.944 ******
2025-09-19 11:42:25.823633 | orchestrator |
2025-09-19 11:42:25.823640 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 11:42:25.823648 | orchestrator | Friday 19 September 2025 11:41:00 +0000 (0:00:00.078) 0:01:32.022 ******
2025-09-19 11:42:25.823656 | orchestrator |
2025-09-19 11:42:25.823664 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 11:42:25.823671 | orchestrator | Friday 19 September 2025 11:41:00 +0000 (0:00:00.146) 0:01:32.168 ******
2025-09-19 11:42:25.823679 | orchestrator |
2025-09-19 11:42:25.823687 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 11:42:25.823695 | orchestrator | Friday 19 September 2025 11:41:00 +0000 (0:00:00.125) 0:01:32.294 ******
2025-09-19 11:42:25.823703 | orchestrator |
2025-09-19 11:42:25.823710 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-09-19 11:42:25.823718 | orchestrator | Friday 19 September 2025 11:41:01 +0000 (0:00:00.088) 0:01:32.383 ******
2025-09-19 11:42:25.823726 | orchestrator | changed: [testbed-manager]
2025-09-19 11:42:25.823734 | orchestrator |
2025-09-19 11:42:25.823742 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-09-19 11:42:25.823749 | orchestrator | Friday 19 September 2025 11:41:14 +0000 (0:00:13.781) 0:01:46.164 ******
2025-09-19 11:42:25.823757 | orchestrator | changed: [testbed-manager]
2025-09-19 11:42:25.823765 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:42:25.823772 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:42:25.823780 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:42:25.823788 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:42:25.823795 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:42:25.823803 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:42:25.823811 | orchestrator |
2025-09-19 11:42:25.823819 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-09-19 11:42:25.823826 | orchestrator | Friday 19 September 2025 11:41:29 +0000 (0:00:14.975) 0:02:01.139 ******
2025-09-19 11:42:25.823834 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:42:25.823842 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:42:25.823849 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:42:25.823857 | orchestrator |
2025-09-19 11:42:25.823865 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-09-19 11:42:25.823873 | orchestrator | Friday 19 September 2025 11:41:35 +0000 (0:00:05.608) 0:02:06.748 ******
2025-09-19 11:42:25.823880 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:42:25.823888 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:42:25.823895 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:42:25.823903 | orchestrator |
2025-09-19 11:42:25.823911 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-09-19 11:42:25.823919 | orchestrator | Friday 19 September 2025 11:41:40 +0000 (0:00:05.130) 0:02:11.878 ******
2025-09-19 11:42:25.823927 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:42:25.823935 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:42:25.823946 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:42:25.823954 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:42:25.823962 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:42:25.823969 | orchestrator | changed: [testbed-manager]
2025-09-19 11:42:25.823977 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:42:25.823985 | orchestrator |
2025-09-19 11:42:25.823993 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-09-19 11:42:25.824001 | orchestrator | Friday 19 September 2025 11:41:54 +0000 (0:00:13.639) 0:02:25.518 ******
2025-09-19 11:42:25.824008 | orchestrator | changed: [testbed-manager]
2025-09-19 11:42:25.824016 | orchestrator |
2025-09-19 11:42:25.824024 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-09-19 11:42:25.824037 | orchestrator | Friday 19 September 2025 11:42:01 +0000 (0:00:06.929) 0:02:32.447 ******
2025-09-19 11:42:25.824045 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:42:25.824052 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:42:25.824060 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:42:25.824068 | orchestrator |
2025-09-19 11:42:25.824075 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-09-19 11:42:25.824083 | orchestrator | Friday 19 September 2025 11:42:06 +0000 (0:00:05.828) 0:02:38.276 ******
2025-09-19 11:42:25.824091 | orchestrator | changed: [testbed-manager]
2025-09-19 11:42:25.824098 | orchestrator |
2025-09-19 11:42:25.824106 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-09-19 11:42:25.824114 | orchestrator | Friday 19 September 2025 11:42:11 +0000 (0:00:04.464) 0:02:42.740 ******
2025-09-19 11:42:25.824121 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:42:25.824129 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:42:25.824137 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:42:25.824144 | orchestrator |
2025-09-19 11:42:25.824152 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:42:25.824160 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-19 11:42:25.824169 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 11:42:25.824177 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 11:42:25.824184 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 11:42:25.824192 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 11:42:25.824204 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 11:42:25.824212 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 11:42:25.824219 | orchestrator |
2025-09-19 11:42:25.824227 | orchestrator |
2025-09-19 11:42:25.824235 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:42:25.824243 | orchestrator | Friday 19 September 2025 11:42:22 +0000 (0:00:10.999) 0:02:53.740 ******
2025-09-19 11:42:25.824263 | orchestrator | ===============================================================================
2025-09-19 11:42:25.824271 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.52s
2025-09-19 11:42:25.824278 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.59s
2025-09-19 11:42:25.824286 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.98s
2025-09-19 11:42:25.824294 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.78s
2025-09-19 11:42:25.824301 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.64s
2025-09-19 11:42:25.824309 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.00s
2025-09-19 11:42:25.824317 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.93s
2025-09-19 11:42:25.824324 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.10s
2025-09-19 11:42:25.824332 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.83s
2025-09-19 11:42:25.824340 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.72s
2025-09-19 11:42:25.824347 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.61s
2025-09-19 11:42:25.824361 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.13s
2025-09-19 11:42:25.824369 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.46s
2025-09-19 11:42:25.824377 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.27s
2025-09-19 11:42:25.824384 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.95s
2025-09-19 11:42:25.824392 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.75s
2025-09-19 11:42:25.824400 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.84s
2025-09-19 11:42:25.824407 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.30s
2025-09-19 11:42:25.824415 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.10s
2025-09-19 11:42:25.824427 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.10s
2025-09-19 11:42:25.824435 | orchestrator | 2025-09-19 11:42:25 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:42:25.824443 | orchestrator | 2025-09-19 11:42:25 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state STARTED
2025-09-19 11:42:25.824451 | orchestrator | 2025-09-19 11:42:25 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:42:28.868980 | orchestrator | 2025-09-19 11:42:28 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED
2025-09-19 11:42:28.870063 |
orchestrator | 2025-09-19 11:42:28 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:42:28.871425 | orchestrator | 2025-09-19 11:42:28 | INFO  | Task 8c8b7603-5c7f-40bb-b2a2-2eebc1a32d82 is in state STARTED
2025-09-19 11:42:28.872646 | orchestrator | 2025-09-19 11:42:28 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:42:28.874789 | orchestrator | 2025-09-19 11:42:28 | INFO  | Task 1804ba28-62a2-4b80-84e6-13d759cb7728 is in state SUCCESS
2025-09-19 11:42:28.876577 | orchestrator |
2025-09-19 11:42:28.876622 | orchestrator |
2025-09-19 11:42:28.876634 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:42:28.876645 | orchestrator |
2025-09-19 11:42:28.876655 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:42:28.876815 | orchestrator | Friday 19 September 2025 11:39:36 +0000 (0:00:00.269) 0:00:00.269 ******
2025-09-19 11:42:28.876832 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:42:28.876843 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:42:28.876853 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:42:28.876862 | orchestrator |
2025-09-19 11:42:28.876871 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:42:28.876898 | orchestrator | Friday 19 September 2025 11:39:36 +0000 (0:00:00.347) 0:00:00.617 ******
2025-09-19 11:42:28.876908 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-09-19 11:42:28.876918 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-09-19 11:42:28.876927 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-09-19 11:42:28.876937 | orchestrator |
2025-09-19 11:42:28.876946 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-09-19 11:42:28.876956 | orchestrator |
2025-09-19 11:42:28.876965 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-19 11:42:28.876975 | orchestrator | Friday 19 September 2025 11:39:37 +0000 (0:00:00.452) 0:00:01.069 ******
2025-09-19 11:42:28.877001 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:42:28.877012 | orchestrator |
2025-09-19 11:42:28.877021 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-09-19 11:42:28.877031 | orchestrator | Friday 19 September 2025 11:39:37 +0000 (0:00:00.529) 0:00:01.599 ******
2025-09-19 11:42:28.877061 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-09-19 11:42:28.877071 | orchestrator |
2025-09-19 11:42:28.877081 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-09-19 11:42:28.877090 | orchestrator | Friday 19 September 2025 11:39:49 +0000 (0:00:11.593) 0:00:13.193 ******
2025-09-19 11:42:28.877100 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-09-19 11:42:28.877110 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-09-19 11:42:28.877119 | orchestrator |
2025-09-19 11:42:28.877128 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-09-19 11:42:28.877138 | orchestrator | Friday 19 September 2025 11:39:55 +0000 (0:00:06.232) 0:00:19.425 ******
2025-09-19 11:42:28.877148 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-09-19 11:42:28.877158 | orchestrator |
2025-09-19 11:42:28.877167 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-09-19 11:42:28.877177 | orchestrator | Friday 19 September 2025 11:39:58 +0000 (0:00:03.241) 0:00:22.667 ****** 2025-09-19
11:42:28.877187 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 11:42:28.877196 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-09-19 11:42:28.877206 | orchestrator |
2025-09-19 11:42:28.877215 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-09-19 11:42:28.877225 | orchestrator | Friday 19 September 2025 11:40:02 +0000 (0:00:03.498) 0:00:26.165 ******
2025-09-19 11:42:28.877234 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 11:42:28.877244 | orchestrator |
2025-09-19 11:42:28.877275 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-09-19 11:42:28.877285 | orchestrator | Friday 19 September 2025 11:40:05 +0000 (0:00:03.655) 0:00:29.821 ******
2025-09-19 11:42:28.877294 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-09-19 11:42:28.877304 | orchestrator |
2025-09-19 11:42:28.877313 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-09-19 11:42:28.877323 | orchestrator | Friday 19 September 2025 11:40:10 +0000 (0:00:04.456) 0:00:34.278 ******
2025-09-19 11:42:28.877351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:42:28.877368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:42:28.877425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 11:42:28.877438 | orchestrator |
2025-09-19 11:42:28.877450 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-19 11:42:28.877461 | orchestrator | Friday 19 September 2025 11:40:14 +0000 (0:00:03.818) 0:00:38.097 ******
2025-09-19 11:42:28.877479 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:42:28.877491 | orchestrator |
2025-09-19 11:42:28.877502 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-09-19 11:42:28.877512 | orchestrator | Friday 19 September 2025 11:40:14 +0000 (0:00:00.592) 0:00:38.689 ******
2025-09-19 11:42:28.877532 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:42:28.877543 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:42:28.877553 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:42:28.877564 | orchestrator |
2025-09-19 11:42:28.877575 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-09-19 11:42:28.877586 | orchestrator | Friday 19 September 2025 11:40:18 +0000 (0:00:03.879) 0:00:42.569 ******
2025-09-19 11:42:28.877597 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 11:42:28.877608 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 11:42:28.877617 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 11:42:28.877627 | orchestrator |
2025-09-19 11:42:28.877636 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-09-19 11:42:28.877646 | orchestrator | Friday 19 September 2025 11:40:20 +0000 (0:00:01.758) 0:00:44.327 ******
2025-09-19 11:42:28.877666 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 11:42:28.877676 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 11:42:28.877686 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 11:42:28.877695 | orchestrator |
2025-09-19 11:42:28.877705 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-09-19 11:42:28.877714 | orchestrator | Friday 19 September 2025 11:40:21 +0000 (0:00:01.428) 0:00:45.756 ******
2025-09-19 11:42:28.877724 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:42:28.877733 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:42:28.877743 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:42:28.877752 | orchestrator |
2025-09-19 11:42:28.877761 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-09-19 11:42:28.877771 | orchestrator | Friday 19 September 2025 11:40:22 +0000 (0:00:01.051) 0:00:46.807 ******
2025-09-19 11:42:28.877780 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:42:28.877790 | orchestrator |
2025-09-19 11:42:28.877799 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-09-19 11:42:28.877809 | orchestrator | Friday 19 September 2025 11:40:23 +0000 (0:00:00.136) 0:00:46.943 ******
2025-09-19 11:42:28.877818 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:42:28.877828 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:42:28.877837 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:42:28.877847 | orchestrator |
2025-09-19 11:42:28.877856 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-19 11:42:28.877865 | orchestrator | Friday 19 September 2025 11:40:23 +0000 (0:00:00.295) 0:00:47.239 ******
2025-09-19 11:42:28.877875 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:42:28.877884 | orchestrator |
2025-09-19 11:42:28.877894 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-09-19 11:42:28.877903 | orchestrator | Friday 19 September 2025 11:40:23 +0000 (0:00:00.546) 0:00:47.785 ******
2025-09-19 11:42:28.877919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'],
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:42:28.877942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:42:28.877954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:42:28.877970 | orchestrator | 2025-09-19 11:42:28.877980 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-19 11:42:28.877990 | orchestrator | Friday 19 September 2025 11:40:28 +0000 (0:00:04.131) 0:00:51.916 ****** 2025-09-19 11:42:28.878011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:42:28.878076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:42:28.878094 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:42:28.878104 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:42:28.878123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 
11:42:28.878134 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:42:28.878144 | orchestrator | 2025-09-19 11:42:28.878153 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-19 11:42:28.878163 | orchestrator | Friday 19 September 2025 11:40:30 +0000 (0:00:02.577) 0:00:54.494 ****** 2025-09-19 11:42:28.878178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:42:28.878196 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:42:28.878212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:42:28.878223 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 11:42:28.878237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 11:42:28.878248 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:42:28.878281 | orchestrator | 2025-09-19 11:42:28.878291 | orchestrator | TASK [glance : Creating TLS backend PEM File] 
********************************** 2025-09-19 11:42:28.878301 | orchestrator | Friday 19 September 2025 11:40:33 +0000 (0:00:03.122) 0:00:57.616 ****** 2025-09-19 11:42:28.878310 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:42:28.878320 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:42:28.878335 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:42:28.878345 | orchestrator | 2025-09-19 11:42:28.878354 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-19 11:42:28.878364 | orchestrator | Friday 19 September 2025 11:40:37 +0000 (0:00:03.879) 0:01:01.496 ****** 2025-09-19 11:42:28.878380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:42:28.878397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:42:28.878408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:42:28.878424 | orchestrator | 2025-09-19 11:42:28.878434 | 
orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-19 11:42:28.878444 | orchestrator | Friday 19 September 2025 11:40:41 +0000 (0:00:03.891) 0:01:05.388 ****** 2025-09-19 11:42:28.878453 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:42:28.878462 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:42:28.878472 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:42:28.878482 | orchestrator | 2025-09-19 11:42:28.878492 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-19 11:42:28.878506 | orchestrator | Friday 19 September 2025 11:40:47 +0000 (0:00:06.275) 0:01:11.663 ****** 2025-09-19 11:42:28.878516 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:42:28.878526 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:42:28.878536 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:42:28.878545 | orchestrator | 2025-09-19 11:42:28.878555 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-19 11:42:28.878564 | orchestrator | Friday 19 September 2025 11:40:52 +0000 (0:00:04.714) 0:01:16.377 ****** 2025-09-19 11:42:28.878574 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:42:28.878583 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:42:28.878592 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:42:28.878602 | orchestrator | 2025-09-19 11:42:28.878612 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-19 11:42:28.878621 | orchestrator | Friday 19 September 2025 11:40:57 +0000 (0:00:05.301) 0:01:21.679 ****** 2025-09-19 11:42:28.878631 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:42:28.878640 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:42:28.878650 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:42:28.878659 | orchestrator | 2025-09-19 11:42:28.878669 | 
orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-19 11:42:28.878678 | orchestrator | Friday 19 September 2025 11:41:01 +0000 (0:00:03.554) 0:01:25.234 ****** 2025-09-19 11:42:28.878688 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:42:28.878697 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:42:28.878707 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:42:28.878716 | orchestrator | 2025-09-19 11:42:28.878730 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-19 11:42:28.878739 | orchestrator | Friday 19 September 2025 11:41:05 +0000 (0:00:03.837) 0:01:29.071 ****** 2025-09-19 11:42:28.878749 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:42:28.878764 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:42:28.878773 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:42:28.878783 | orchestrator | 2025-09-19 11:42:28.878792 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-19 11:42:28.878802 | orchestrator | Friday 19 September 2025 11:41:05 +0000 (0:00:00.322) 0:01:29.394 ****** 2025-09-19 11:42:28.878812 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-19 11:42:28.878821 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:42:28.878831 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-19 11:42:28.878840 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:42:28.878850 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-19 11:42:28.878859 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:42:28.878869 | orchestrator | 2025-09-19 11:42:28.878878 | orchestrator | TASK [glance : Check glance containers] 
**************************************** 2025-09-19 11:42:28.878888 | orchestrator | Friday 19 September 2025 11:41:10 +0000 (0:00:04.808) 0:01:34.203 ****** 2025-09-19 11:42:28.878898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:42:28.878921 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 11:42:28.878941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 11:42:28.878952 | orchestrator |
2025-09-19 11:42:28.878962 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-19 11:42:28.878971 | orchestrator | Friday 19 September 2025 11:41:16 +0000 (0:00:06.556) 0:01:40.759 ******
2025-09-19 11:42:28.878981 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:42:28.878990 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:42:28.879000 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:42:28.879009 | orchestrator |
2025-09-19
11:42:28.879018 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-09-19 11:42:28.879028 | orchestrator | Friday 19 September 2025 11:41:17 +0000 (0:00:00.669) 0:01:41.429 ******
2025-09-19 11:42:28.879037 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:42:28.879047 | orchestrator |
2025-09-19 11:42:28.879056 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-09-19 11:42:28.879066 | orchestrator | Friday 19 September 2025 11:41:19 +0000 (0:00:02.394) 0:01:43.824 ******
2025-09-19 11:42:28.879075 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:42:28.879084 | orchestrator |
2025-09-19 11:42:28.879094 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-09-19 11:42:28.879103 | orchestrator | Friday 19 September 2025 11:41:22 +0000 (0:00:02.199) 0:01:46.024 ******
2025-09-19 11:42:28.879112 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:42:28.879122 | orchestrator |
2025-09-19 11:42:28.879131 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-09-19 11:42:28.879146 | orchestrator | Friday 19 September 2025 11:41:24 +0000 (0:00:02.154) 0:01:48.178 ******
2025-09-19 11:42:28.879156 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:42:28.879171 | orchestrator |
2025-09-19 11:42:28.879181 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-09-19 11:42:28.879191 | orchestrator | Friday 19 September 2025 11:41:53 +0000 (0:00:28.744) 0:02:16.922 ******
2025-09-19 11:42:28.879200 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:42:28.879209 | orchestrator |
2025-09-19 11:42:28.879219 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-19 11:42:28.879229 | orchestrator | Friday 19 September 2025 11:41:55 +0000 (0:00:02.253) 0:02:19.175 ******
2025-09-19 11:42:28.879238 | orchestrator |
2025-09-19 11:42:28.879248 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-19 11:42:28.879305 | orchestrator | Friday 19 September 2025 11:41:55 +0000 (0:00:00.238) 0:02:19.414 ******
2025-09-19 11:42:28.879315 | orchestrator |
2025-09-19 11:42:28.879325 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-19 11:42:28.879334 | orchestrator | Friday 19 September 2025 11:41:55 +0000 (0:00:00.063) 0:02:19.478 ******
2025-09-19 11:42:28.879344 | orchestrator |
2025-09-19 11:42:28.879354 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-09-19 11:42:28.879363 | orchestrator | Friday 19 September 2025 11:41:55 +0000 (0:00:00.063) 0:02:19.542 ******
2025-09-19 11:42:28.879373 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:42:28.879382 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:42:28.879396 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:42:28.879406 | orchestrator |
2025-09-19 11:42:28.879416 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:42:28.879427 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 11:42:28.879438 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 11:42:28.879448 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 11:42:28.879457 | orchestrator |
2025-09-19 11:42:28.879467 | orchestrator |
2025-09-19 11:42:28.879476 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:42:28.879486 | orchestrator | Friday 19 September 2025 11:42:27 +0000 (0:00:31.621) 0:02:51.163 ******
2025-09-19 11:42:28.879496 | orchestrator | ===============================================================================
2025-09-19 11:42:28.879505 | orchestrator | glance : Restart glance-api container ---------------------------------- 31.62s
2025-09-19 11:42:28.879515 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.74s
2025-09-19 11:42:28.879524 | orchestrator | service-ks-register : glance | Creating services ----------------------- 11.59s
2025-09-19 11:42:28.879534 | orchestrator | glance : Check glance containers ---------------------------------------- 6.56s
2025-09-19 11:42:28.879543 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.28s
2025-09-19 11:42:28.879553 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.23s
2025-09-19 11:42:28.879562 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.30s
2025-09-19 11:42:28.879571 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.81s
2025-09-19 11:42:28.879581 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.71s
2025-09-19 11:42:28.879590 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.46s
2025-09-19 11:42:28.879600 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.13s
2025-09-19 11:42:28.879609 | orchestrator | glance : Copying over config.json files for services -------------------- 3.89s
2025-09-19 11:42:28.879619 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.88s
2025-09-19 11:42:28.879628 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.88s
2025-09-19 11:42:28.879646 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.84s
2025-09-19 11:42:28.879655
| orchestrator | glance : Ensuring config directories exist ------------------------------ 3.82s
2025-09-19 11:42:28.879665 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.66s
2025-09-19 11:42:28.879674 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.55s
2025-09-19 11:42:28.879684 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.50s
2025-09-19 11:42:28.879693 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.24s
2025-09-19 11:42:28.879703 | orchestrator | 2025-09-19 11:42:28 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:42:31.929405 | orchestrator | 2025-09-19 11:42:31 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED
2025-09-19 11:42:31.930433 | orchestrator | 2025-09-19 11:42:31 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:42:31.932684 | orchestrator | 2025-09-19 11:42:31 | INFO  | Task 8c8b7603-5c7f-40bb-b2a2-2eebc1a32d82 is in state STARTED
2025-09-19 11:42:31.934327 | orchestrator | 2025-09-19 11:42:31 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state STARTED
2025-09-19 11:42:31.934754 | orchestrator | 2025-09-19 11:42:31 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:43:50.985774 | orchestrator | 2025-09-19 11:43:50 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED
2025-09-19 11:43:50.986088 | orchestrator | 2025-09-19 11:43:50 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:43:51.034276 | orchestrator |
2025-09-19 11:43:51.034367 | orchestrator |
2025-09-19 11:43:51.034381 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:43:51.034394 | orchestrator |
2025-09-19 11:43:51.034406 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:43:51.034418 | orchestrator | Friday 19 September 2025 11:40:09 +0000 (0:00:00.265) 0:00:00.265 ******
2025-09-19 11:43:51.034429 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:43:51.034442 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:43:51.034453 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:43:51.034488 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:43:51.034499 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:43:51.034509 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:43:51.034520 | orchestrator |
2025-09-19 11:43:51.034531 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:43:51.034542 | orchestrator | Friday 19 September 2025 11:40:10 +0000 (0:00:00.676) 0:00:00.942 ******
2025-09-19 11:43:51.034554 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-09-19 11:43:51.034565 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-09-19 11:43:51.034576 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-09-19 11:43:51.034586 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-09-19 11:43:51.034597 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-09-19 11:43:51.034608 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-09-19 11:43:51.034618 | orchestrator |
2025-09-19
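The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" records above come from a fixed-interval polling loop over the four OSISM task IDs. A minimal sketch of that behaviour in Python (hypothetical illustration only; `get_state` and `wait_for_tasks` are assumed names, not the project's actual code):

```python
import time

def wait_for_tasks(get_state, task_ids, poll_interval=1.0, max_checks=600):
    """Poll task states until no task reports STARTED anymore.

    get_state is a hypothetical callable mapping a task ID to its
    current state string (e.g. "STARTED", "SUCCESS", "FAILURE").
    """
    pending = set(task_ids)
    states = {}
    for _ in range(max_checks):
        # Re-check every task that was still running on the last pass.
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        pending = {t for t, s in states.items() if s == "STARTED"}
        if not pending:
            return states  # all tasks have reached a terminal state
        print(f"Wait {int(poll_interval)} second(s) until the next check")
        time.sleep(poll_interval)
    raise TimeoutError(f"tasks still running: {sorted(pending)}")
```

A fixed one-second interval keeps the log chatty but makes stalls easy to spot; a production poller would more likely use a longer or backed-off interval.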
11:43:51.034629 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-09-19 11:43:51.034639 | orchestrator |
2025-09-19 11:43:51.034650 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-19 11:43:51.034661 | orchestrator | Friday 19 September 2025 11:40:10 +0000 (0:00:00.724) 0:00:01.667 ******
2025-09-19 11:43:51.034672 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:43:51.035325 | orchestrator |
2025-09-19 11:43:51.035343 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-09-19 11:43:51.035354 | orchestrator | Friday 19 September 2025 11:40:12 +0000 (0:00:01.558) 0:00:03.225 ******
2025-09-19 11:43:51.035365 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-09-19 11:43:51.035376 | orchestrator |
2025-09-19 11:43:51.035386 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-09-19 11:43:51.035397 | orchestrator | Friday 19 September 2025 11:40:16 +0000 (0:00:03.736) 0:00:06.962 ******
2025-09-19 11:43:51.035408 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-09-19 11:43:51.035420 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-09-19 11:43:51.035431 | orchestrator |
2025-09-19 11:43:51.035441 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-09-19 11:43:51.035452 | orchestrator | Friday 19 September 2025 11:40:23 +0000 (0:00:07.102) 0:00:14.064 ******
2025-09-19 11:43:51.035463 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 11:43:51.035474 | orchestrator |
2025-09-19 11:43:51.035485 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-09-19 11:43:51.035495 | orchestrator | Friday 19 September 2025 11:40:26 +0000 (0:00:03.488) 0:00:17.553 ******
2025-09-19 11:43:51.035506 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 11:43:51.035516 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-09-19 11:43:51.035527 | orchestrator |
2025-09-19 11:43:51.035538 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-09-19 11:43:51.035549 | orchestrator | Friday 19 September 2025 11:40:31 +0000 (0:00:04.338) 0:00:21.891 ******
2025-09-19 11:43:51.035968 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 11:43:51.035991 | orchestrator |
2025-09-19 11:43:51.036028 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-09-19 11:43:51.036047 | orchestrator | Friday 19 September 2025 11:40:34 +0000 (0:00:03.417) 0:00:25.309 ******
2025-09-19 11:43:51.036064 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-09-19 11:43:51.036081 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-09-19 11:43:51.036100 | orchestrator |
2025-09-19 11:43:51.036120 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-09-19 11:43:51.036158 | orchestrator | Friday 19 September 2025 11:40:42 +0000 (0:00:08.069) 0:00:33.378 ******
2025-09-19 11:43:51.036303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.036325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.036338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.036350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.036369 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.036419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.036434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.036446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.036458 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.036471 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.036488 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.036531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.036545 | orchestrator | 2025-09-19 11:43:51.036556 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 11:43:51.036568 | orchestrator | Friday 19 September 2025 11:40:45 +0000 (0:00:02.809) 0:00:36.187 ****** 2025-09-19 11:43:51.036581 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:51.036593 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:51.036606 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:43:51.036619 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:43:51.036631 | orchestrator | 
skipping: [testbed-node-4]
2025-09-19 11:43:51.036643 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:43:51.036656 | orchestrator |
2025-09-19 11:43:51.036668 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-19 11:43:51.036681 | orchestrator | Friday 19 September 2025 11:40:46 +0000 (0:00:00.722) 0:00:36.910 ******
2025-09-19 11:43:51.036693 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:51.036706 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:43:51.036718 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:43:51.036731 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:43:51.036744 | orchestrator |
2025-09-19 11:43:51.036756 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-09-19 11:43:51.036769 | orchestrator | Friday 19 September 2025 11:40:46 +0000 (0:00:00.931) 0:00:37.841 ******
2025-09-19 11:43:51.036782 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-09-19 11:43:51.036795 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-09-19 11:43:51.036808 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-09-19 11:43:51.036821 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-09-19 11:43:51.036833 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-09-19 11:43:51.036846 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-09-19 11:43:51.036859 | orchestrator |
2025-09-19 11:43:51.036871 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-09-19 11:43:51.036884 | orchestrator | Friday 19 September 2025 11:40:49 +0000 (0:00:02.301) 0:00:40.142 ******
2025-09-19 11:43:51.036898 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value':
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 11:43:51.036923 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 11:43:51.036964 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 11:43:51.036977 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 11:43:51.036989 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 11:43:51.037007 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 11:43:51.037024 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 11:43:51.037060 | orchestrator | changed: 
[testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 11:43:51.037073 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 11:43:51.037085 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 11:43:51.037104 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 11:43:51.037120 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-09-19 11:43:51.037132 | orchestrator |
2025-09-19 11:43:51.037143 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-09-19 11:43:51.037154 | orchestrator | Friday 19 September 2025 11:40:53 +0000 (0:00:03.965) 0:00:44.108 ******
2025-09-19 11:43:51.037165 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-09-19 11:43:51.037177 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-09-19 11:43:51.037187 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-09-19 11:43:51.037198 | orchestrator |
2025-09-19 11:43:51.037279 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-09-19 11:43:51.037301 | orchestrator | Friday 19 September 2025 11:40:55 +0000 (0:00:02.648) 0:00:46.757 ******
2025-09-19 11:43:51.037360 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-09-19 11:43:51.037389 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-09-19 11:43:51.037410 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-09-19 11:43:51.037426 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-09-19 11:43:51.037443 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-09-19 11:43:51.037460 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-09-19 11:43:51.037477 | orchestrator |
2025-09-19 11:43:51.037494 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-09-19 11:43:51.037512 | orchestrator | Friday 19 September 2025 11:40:59 +0000 (0:00:03.770) 0:00:50.528 ******
2025-09-19 11:43:51.037531 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-09-19 11:43:51.037549 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-09-19 11:43:51.037565 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-09-19 11:43:51.037576 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-09-19 11:43:51.037587 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-09-19 11:43:51.037608 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-09-19 11:43:51.037619 | orchestrator |
2025-09-19 11:43:51.037630 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-09-19 11:43:51.037641 | orchestrator | Friday 19 September 2025 11:41:00 +0000 (0:00:01.231) 0:00:51.759 ******
2025-09-19 11:43:51.037652 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:51.037662 | orchestrator |
2025-09-19 11:43:51.037673 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-09-19 11:43:51.037683 | orchestrator | Friday 19 September 2025 11:41:01 +0000 (0:00:00.203) 0:00:51.963 ******
2025-09-19 11:43:51.037694 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:43:51.037704 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:43:51.037715 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:43:51.037726 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:43:51.037736 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:43:51.037747 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:43:51.037757 | orchestrator |
2025-09-19 11:43:51.037768 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-19
11:43:51.037778 | orchestrator | Friday 19 September 2025 11:41:01 +0000 (0:00:00.690) 0:00:52.653 ****** 2025-09-19 11:43:51.037791 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:43:51.037803 | orchestrator | 2025-09-19 11:43:51.037813 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-19 11:43:51.037824 | orchestrator | Friday 19 September 2025 11:41:02 +0000 (0:00:01.067) 0:00:53.721 ****** 2025-09-19 11:43:51.037847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.037858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.037902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.037921 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.037931 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.037945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.037956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.037988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.038006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}}) 2025-09-19 11:43:51.038047 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.038060 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.038076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.038086 | orchestrator | 2025-09-19 11:43:51.038096 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-19 11:43:51.038105 | orchestrator | Friday 19 September 2025 11:41:06 +0000 (0:00:03.298) 0:00:57.019 ****** 2025-09-19 11:43:51.038121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:43:51.038139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038149 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:51.038159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:43:51.038169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:43:51.038194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038261 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:43:51.038271 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:51.038280 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:43:51.038290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038315 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:43:51.038325 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  
2025-09-19 11:43:51.038349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038360 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:43:51.038370 | orchestrator | 2025-09-19 11:43:51.038380 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-19 11:43:51.038389 | orchestrator | Friday 19 September 2025 11:41:08 +0000 (0:00:02.264) 0:00:59.284 ****** 2025-09-19 11:43:51.038399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:43:51.038410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:43:51.038424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:43:51.038469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038478 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:51.038488 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 11:43:51.038497 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:51.038507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038532 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:43:51.038542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038574 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:43:51.038583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.038603 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:43:51.038613 | orchestrator | 2025-09-19 11:43:51.038622 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-19 11:43:51.038632 | orchestrator | Friday 19 September 2025 11:41:10 +0000 (0:00:02.299) 0:01:01.583 ****** 2025-09-19 11:43:51.038646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.038670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.038687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.038698 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.038708 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.038722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.038738 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.038754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.038764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.038774 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.038784 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.038804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.038814 | orchestrator | 2025-09-19 11:43:51.038824 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-19 11:43:51.038834 | orchestrator | Friday 19 September 2025 11:41:14 +0000 (0:00:04.081) 0:01:05.665 ****** 2025-09-19 11:43:51.038843 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 11:43:51.038853 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:43:51.038863 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 
11:43:51.038872 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:43:51.038882 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 11:43:51.038891 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:43:51.038901 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 11:43:51.038910 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 11:43:51.038926 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 11:43:51.038936 | orchestrator | 2025-09-19 11:43:50 | INFO  | Task 8c8b7603-5c7f-40bb-b2a2-2eebc1a32d82 is in state STARTED 2025-09-19 11:43:51.038946 | orchestrator | 2025-09-19 11:43:50 | INFO  | Task 4023b704-77aa-4d28-9603-7e44623ab3a1 is in state SUCCESS 2025-09-19 11:43:51.038956 | orchestrator | 2025-09-19 11:43:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:43:51.038966 | orchestrator | 2025-09-19 11:43:51.038976 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-19 11:43:51.038985 | orchestrator | Friday 19 September 2025 11:41:17 +0000 (0:00:03.138) 0:01:08.803 ****** 2025-09-19 11:43:51.038995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.039006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.039026 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.039054 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039065 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039105 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039120 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039141 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039156 | orchestrator | 2025-09-19 11:43:51.039167 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-19 11:43:51.039176 | orchestrator | Friday 19 September 2025 11:41:27 +0000 (0:00:09.378) 0:01:18.182 ****** 2025-09-19 11:43:51.039186 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:51.039195 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:51.039252 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:43:51.039264 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:43:51.039274 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:43:51.039284 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:43:51.039293 | orchestrator | 2025-09-19 11:43:51.039303 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-19 11:43:51.039312 | orchestrator | Friday 19 September 2025 11:41:29 +0000 
(0:00:01.947) 0:01:20.129 ****** 2025-09-19 11:43:51.039327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:43:51.039338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.039354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:43:51.039366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.039384 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:51.039393 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:51.039403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 11:43:51.039413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.039423 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:43:51.039438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.039454 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.039464 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:43:51.039474 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.039491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.039501 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:43:51.039510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.039525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 11:43:51.039535 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:43:51.039544 | orchestrator | 2025-09-19 11:43:51.039554 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-19 11:43:51.039564 | orchestrator | Friday 19 September 2025 11:41:30 +0000 (0:00:01.297) 0:01:21.427 ****** 2025-09-19 11:43:51.039573 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:51.039582 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:51.039592 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:43:51.039601 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:43:51.039611 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:43:51.039620 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:43:51.039629 | orchestrator | 2025-09-19 11:43:51.039639 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-19 11:43:51.039648 | orchestrator | Friday 19 September 2025 11:41:31 +0000 (0:00:00.996) 0:01:22.423 ****** 2025-09-19 11:43:51.039664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.039685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 11:43:51.039695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 
2025-09-19 11:43:51.039710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039724 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039739 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039768 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039776 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039788 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 11:43:51.039812 | orchestrator | 2025-09-19 11:43:51.039820 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 11:43:51.039828 | orchestrator | Friday 19 September 2025 11:41:33 +0000 (0:00:02.061) 0:01:24.484 ****** 2025-09-19 11:43:51.039836 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:51.039843 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:43:51.039851 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:43:51.039859 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:43:51.039866 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:43:51.039874 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:43:51.039882 | orchestrator | 2025-09-19 11:43:51.039890 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-19 11:43:51.039897 | orchestrator | Friday 19 September 2025 11:41:34 +0000 (0:00:00.614) 0:01:25.098 ****** 2025-09-19 11:43:51.039905 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:43:51.039913 | orchestrator | 
2025-09-19 11:43:51.039921 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-19 11:43:51.039929 | orchestrator | Friday 19 September 2025 11:41:36 +0000 (0:00:02.225) 0:01:27.324 ****** 2025-09-19 11:43:51.039937 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:43:51.039944 | orchestrator | 2025-09-19 11:43:51.039952 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-19 11:43:51.039960 | orchestrator | Friday 19 September 2025 11:41:38 +0000 (0:00:02.316) 0:01:29.641 ****** 2025-09-19 11:43:51.039968 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:43:51.039975 | orchestrator | 2025-09-19 11:43:51.039983 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 11:43:51.039991 | orchestrator | Friday 19 September 2025 11:41:56 +0000 (0:00:18.005) 0:01:47.647 ****** 2025-09-19 11:43:51.039998 | orchestrator | 2025-09-19 11:43:51.040006 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 11:43:51.040014 | orchestrator | Friday 19 September 2025 11:41:56 +0000 (0:00:00.072) 0:01:47.719 ****** 2025-09-19 11:43:51.040021 | orchestrator | 2025-09-19 11:43:51.040029 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 11:43:51.040037 | orchestrator | Friday 19 September 2025 11:41:56 +0000 (0:00:00.065) 0:01:47.784 ****** 2025-09-19 11:43:51.040044 | orchestrator | 2025-09-19 11:43:51.040052 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 11:43:51.040060 | orchestrator | Friday 19 September 2025 11:41:56 +0000 (0:00:00.071) 0:01:47.856 ****** 2025-09-19 11:43:51.040071 | orchestrator | 2025-09-19 11:43:51.040079 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 
11:43:51.040090 | orchestrator | Friday 19 September 2025 11:41:57 +0000 (0:00:00.085) 0:01:47.941 ****** 2025-09-19 11:43:51.040099 | orchestrator | 2025-09-19 11:43:51.040106 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 11:43:51.040114 | orchestrator | Friday 19 September 2025 11:41:57 +0000 (0:00:00.065) 0:01:48.007 ****** 2025-09-19 11:43:51.040127 | orchestrator | 2025-09-19 11:43:51.040135 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-19 11:43:51.040143 | orchestrator | Friday 19 September 2025 11:41:57 +0000 (0:00:00.068) 0:01:48.075 ****** 2025-09-19 11:43:51.040150 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:43:51.040158 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:43:51.040166 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:43:51.040173 | orchestrator | 2025-09-19 11:43:51.040181 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-19 11:43:51.040189 | orchestrator | Friday 19 September 2025 11:42:24 +0000 (0:00:27.004) 0:02:15.079 ****** 2025-09-19 11:43:51.040197 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:43:51.040204 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:43:51.040224 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:43:51.040232 | orchestrator | 2025-09-19 11:43:51.040244 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-19 11:43:51.040258 | orchestrator | Friday 19 September 2025 11:42:34 +0000 (0:00:10.219) 0:02:25.299 ****** 2025-09-19 11:43:51.040269 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:43:51.040277 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:43:51.040284 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:43:51.040292 | orchestrator | 2025-09-19 11:43:51.040300 | orchestrator | RUNNING HANDLER [cinder : 
Restart cinder-backup container] ********************* 2025-09-19 11:43:51.040307 | orchestrator | Friday 19 September 2025 11:43:40 +0000 (0:01:06.095) 0:03:31.395 ****** 2025-09-19 11:43:51.040315 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:43:51.040323 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:43:51.040331 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:43:51.040338 | orchestrator | 2025-09-19 11:43:51.040350 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-19 11:43:51.040359 | orchestrator | Friday 19 September 2025 11:43:48 +0000 (0:00:08.253) 0:03:39.648 ****** 2025-09-19 11:43:51.040366 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:43:51.040374 | orchestrator | 2025-09-19 11:43:51.040382 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:43:51.040390 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 11:43:51.040398 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 11:43:51.040406 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 11:43:51.040413 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 11:43:51.040421 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 11:43:51.040429 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 11:43:51.040436 | orchestrator | 2025-09-19 11:43:51.040444 | orchestrator | 2025-09-19 11:43:51.040452 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:43:51.040460 | orchestrator | Friday 19 September 2025 
11:43:49 +0000 (0:00:01.074) 0:03:40.723 ****** 2025-09-19 11:43:51.040468 | orchestrator | =============================================================================== 2025-09-19 11:43:51.040475 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 66.10s 2025-09-19 11:43:51.040483 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.00s 2025-09-19 11:43:51.040497 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.01s 2025-09-19 11:43:51.040505 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.22s 2025-09-19 11:43:51.040513 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.38s 2025-09-19 11:43:51.040520 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.25s 2025-09-19 11:43:51.040528 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.07s 2025-09-19 11:43:51.040536 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.10s 2025-09-19 11:43:51.040543 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.34s 2025-09-19 11:43:51.040551 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.08s 2025-09-19 11:43:51.040559 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.97s 2025-09-19 11:43:51.040566 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.77s 2025-09-19 11:43:51.040574 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.74s 2025-09-19 11:43:51.040582 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.49s 2025-09-19 11:43:51.040589 | orchestrator | service-ks-register : cinder | Creating roles 
--------------------------- 3.42s 2025-09-19 11:43:51.040601 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.30s 2025-09-19 11:43:51.040609 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.14s 2025-09-19 11:43:51.040617 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.81s 2025-09-19 11:43:51.040624 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.65s 2025-09-19 11:43:51.040632 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.32s 2025-09-19 11:43:54.025149 | orchestrator | 2025-09-19 11:43:54 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:43:54.026509 | orchestrator | 2025-09-19 11:43:54 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:43:54.026577 | orchestrator | 2025-09-19 11:43:54 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:43:54.026960 | orchestrator | 2025-09-19 11:43:54 | INFO  | Task 8c8b7603-5c7f-40bb-b2a2-2eebc1a32d82 is in state STARTED 2025-09-19 11:43:54.026986 | orchestrator | 2025-09-19 11:43:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:30.524815 | orchestrator | 2025-09-19 11:44:30 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state STARTED 2025-09-19 11:44:30.524871 | orchestrator | 2025-09-19 11:44:30 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:44:30.525194 | orchestrator | 2025-09-19 11:44:30 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:44:30.525884 | orchestrator | 2025-09-19 11:44:30 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:44:30.527453 | orchestrator | 2025-09-19 11:44:30 | INFO  | Task 8c8b7603-5c7f-40bb-b2a2-2eebc1a32d82 is in state SUCCESS 2025-09-19 11:44:30.528620 | orchestrator | 2025-09-19 11:44:30.528680 | orchestrator | 2025-09-19 11:44:30.528690 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 11:44:30.528698 | orchestrator | 2025-09-19 11:44:30.528705 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:44:30.528714 | orchestrator | Friday 19 September 2025 11:42:31 +0000 (0:00:00.285) 0:00:00.285 ****** 2025-09-19 11:44:30.528721 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:44:30.528728 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:44:30.528790 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:44:30.528800 | orchestrator | 
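The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" records above come from the OSISM CLI polling its task queue once per second until every Celery task reaches SUCCESS. A minimal sketch of such a wait loop — with a hypothetical `get_states` callable standing in for the real task-API query, not the actual osism client code:

```python
import time

def wait_for_tasks(get_states, poll_interval=1.0, max_checks=600):
    """Poll a task-state source until every task reports SUCCESS.

    get_states is a callable returning {task_id: state}; the real client
    queries its task API instead of a local callable.
    """
    for _ in range(max_checks):
        states = get_states()
        for task_id, state in states.items():
            print(f"Task {task_id} is in state {state}")
        if states and all(s == "SUCCESS" for s in states.values()):
            return states
        print("Wait 1 second(s) until the next check")
        time.sleep(poll_interval)
    raise TimeoutError("tasks did not reach SUCCESS in time")
```

A fixed interval keeps the console output legible; with many concurrent deploy tasks, each check simply re-lists every task that has not yet finished, which is why the same task IDs recur until their state flips.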
2025-09-19 11:44:30.528807 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:44:30.528813 | orchestrator | Friday 19 September 2025 11:42:31 +0000 (0:00:00.298) 0:00:00.583 ****** 2025-09-19 11:44:30.528819 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-19 11:44:30.528840 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-19 11:44:30.528847 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-19 11:44:30.528853 | orchestrator | 2025-09-19 11:44:30.528859 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-19 11:44:30.528865 | orchestrator | 2025-09-19 11:44:30.528902 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 11:44:30.528908 | orchestrator | Friday 19 September 2025 11:42:32 +0000 (0:00:00.469) 0:00:01.053 ****** 2025-09-19 11:44:30.528914 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:44:30.528921 | orchestrator | 2025-09-19 11:44:30.528927 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-19 11:44:30.528933 | orchestrator | Friday 19 September 2025 11:42:32 +0000 (0:00:00.542) 0:00:01.595 ****** 2025-09-19 11:44:30.528940 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-19 11:44:30.528946 | orchestrator | 2025-09-19 11:44:30.528952 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-19 11:44:30.528958 | orchestrator | Friday 19 September 2025 11:42:36 +0000 (0:00:03.646) 0:00:05.242 ****** 2025-09-19 11:44:30.528964 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-19 11:44:30.528971 | orchestrator | 
changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-19 11:44:30.528977 | orchestrator | 2025-09-19 11:44:30.528983 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-19 11:44:30.528989 | orchestrator | Friday 19 September 2025 11:42:43 +0000 (0:00:07.069) 0:00:12.312 ****** 2025-09-19 11:44:30.528995 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 11:44:30.529002 | orchestrator | 2025-09-19 11:44:30.529008 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-19 11:44:30.529014 | orchestrator | Friday 19 September 2025 11:42:47 +0000 (0:00:03.595) 0:00:15.907 ****** 2025-09-19 11:44:30.529087 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 11:44:30.529096 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-19 11:44:30.529102 | orchestrator | 2025-09-19 11:44:30.529108 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-19 11:44:30.529115 | orchestrator | Friday 19 September 2025 11:42:51 +0000 (0:00:03.837) 0:00:19.745 ****** 2025-09-19 11:44:30.529121 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 11:44:30.529128 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-19 11:44:30.529135 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-19 11:44:30.529142 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-19 11:44:30.529167 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-19 11:44:30.529334 | orchestrator | 2025-09-19 11:44:30.529342 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-19 11:44:30.529348 | orchestrator | Friday 19 September 2025 11:43:07 +0000 (0:00:16.759) 0:00:36.504 ****** 2025-09-19 11:44:30.529355 | 
orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-19 11:44:30.529361 | orchestrator | 2025-09-19 11:44:30.529366 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-19 11:44:30.529372 | orchestrator | Friday 19 September 2025 11:43:12 +0000 (0:00:04.582) 0:00:41.087 ****** 2025-09-19 11:44:30.529380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.529409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.529417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.529424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}}) 2025-09-19 11:44:30.529431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.529438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.529455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.529465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.529472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.529479 | orchestrator | 2025-09-19 11:44:30.529485 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-19 11:44:30.529491 | orchestrator | Friday 19 September 2025 11:43:14 +0000 (0:00:01.793) 0:00:42.880 ****** 2025-09-19 11:44:30.529498 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-19 11:44:30.529504 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-19 11:44:30.529510 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-19 11:44:30.529516 | orchestrator | 2025-09-19 11:44:30.529523 | orchestrator | TASK 
[barbican : Check if policies shall be overwritten] *********************** 2025-09-19 11:44:30.529528 | orchestrator | Friday 19 September 2025 11:43:15 +0000 (0:00:01.141) 0:00:44.022 ****** 2025-09-19 11:44:30.529534 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:30.529541 | orchestrator | 2025-09-19 11:44:30.529546 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-19 11:44:30.529552 | orchestrator | Friday 19 September 2025 11:43:15 +0000 (0:00:00.253) 0:00:44.275 ****** 2025-09-19 11:44:30.529557 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:30.529563 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:44:30.529568 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:44:30.529573 | orchestrator | 2025-09-19 11:44:30.529580 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 11:44:30.529586 | orchestrator | Friday 19 September 2025 11:43:16 +0000 (0:00:01.362) 0:00:45.638 ****** 2025-09-19 11:44:30.529592 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:44:30.529598 | orchestrator | 2025-09-19 11:44:30.529608 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-19 11:44:30.529613 | orchestrator | Friday 19 September 2025 11:43:17 +0000 (0:00:00.711) 0:00:46.349 ****** 2025-09-19 11:44:30.529633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.529648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.529654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.529661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.529667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.529677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.529683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.529696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.529703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.529709 | orchestrator | 2025-09-19 11:44:30.529714 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-19 11:44:30.529720 | orchestrator | Friday 19 September 2025 11:43:21 +0000 (0:00:03.716) 0:00:50.066 ****** 2025-09-19 11:44:30.529726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:44:30.529736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.529742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.529748 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:30.529762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:44:30.529769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.529776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.529783 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:44:30.529789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:44:30.529798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.529804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.529810 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:44:30.529816 | orchestrator | 
2025-09-19 11:44:30.529826 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-19 11:44:30.529832 | orchestrator | Friday 19 September 2025 11:43:23 +0000 (0:00:01.917) 0:00:51.984 ****** 2025-09-19 11:44:30.529847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:44:30.529855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.529866 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.529872 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:30.529877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:44:30.529884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.529898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.529905 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:44:30.529911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:44:30.529922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.529929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.529935 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:44:30.529941 | orchestrator | 2025-09-19 11:44:30.529946 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-19 11:44:30.529952 | orchestrator | Friday 19 September 2025 11:43:24 +0000 (0:00:00.810) 0:00:52.795 ****** 2025-09-19 11:44:30.529958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.529969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:30.529987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value':
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.529998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530082 | orchestrator | 2025-09-19 11:44:30.530088 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-19 11:44:30.530094 | orchestrator | Friday 19 September 2025 11:43:27 +0000 (0:00:03.425) 0:00:56.220 ****** 2025-09-19 11:44:30.530100 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:30.530106 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:44:30.530112 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:44:30.530118 | orchestrator | 2025-09-19 11:44:30.530124 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-19 11:44:30.530130 | orchestrator | Friday 19 September 2025 11:43:29 +0000 (0:00:02.398) 0:00:58.619 ****** 2025-09-19 11:44:30.530136 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:44:30.530143 | orchestrator | 2025-09-19 11:44:30.530159 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-19 11:44:30.530167 | orchestrator | Friday 19 
September 2025 11:43:31 +0000 (0:00:01.228) 0:00:59.847 ****** 2025-09-19 11:44:30.530173 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:30.530178 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:44:30.530185 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:44:30.530190 | orchestrator | 2025-09-19 11:44:30.530197 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-19 11:44:30.530203 | orchestrator | Friday 19 September 2025 11:43:32 +0000 (0:00:00.924) 0:01:00.775 ****** 2025-09-19 11:44:30.530210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.530222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.530233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.530245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530296 | orchestrator | 2025-09-19 11:44:30.530302 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-19 11:44:30.530308 | orchestrator | Friday 19 September 2025 11:43:40 +0000 (0:00:08.813) 0:01:09.588 ****** 2025-09-19 11:44:30.530314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:44:30.530321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.530327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.530333 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:30.530343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:44:30.530355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.530363 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.530369 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:44:30.530376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 11:44:30.530383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.530389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:44:30.530396 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:44:30.530402 | orchestrator | 2025-09-19 11:44:30.530408 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-19 11:44:30.530414 | orchestrator | Friday 19 September 2025 11:43:42 +0000 (0:00:01.379) 0:01:10.968 ****** 2025-09-19 11:44:30.530434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.530441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.530448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 11:44:30.530454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:44:30.530505 | orchestrator | 2025-09-19 11:44:30.530511 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-19 11:44:30.530518 | orchestrator | Friday 19 September 2025 11:43:45 +0000 (0:00:03.660) 0:01:14.628 ****** 2025-09-19 11:44:30.530524 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:44:30.530530 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:44:30.530536 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:44:30.530542 | orchestrator | 2025-09-19 11:44:30.530548 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-19 11:44:30.530554 | orchestrator | Friday 19 September 2025 11:43:46 +0000 (0:00:00.335) 0:01:14.964 ****** 2025-09-19 11:44:30.530559 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:30.530564 | orchestrator | 2025-09-19 11:44:30.530570 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-19 11:44:30.530576 | orchestrator | Friday 19 September 2025 11:43:48 +0000 (0:00:02.491) 0:01:17.455 ****** 2025-09-19 11:44:30.530582 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:30.530587 | orchestrator | 2025-09-19 11:44:30.530592 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-09-19 11:44:30.530603 | orchestrator | Friday 19 September 2025 11:43:51 +0000 (0:00:02.633) 0:01:20.089 ****** 2025-09-19 11:44:30.530609 | orchestrator | changed: [testbed-node-0] 
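The container definitions above each carry a `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`). As a rough illustration of what such a dict corresponds to, the sketch below renders one into `docker run`-style health-check flags. The mapping function and the `s` unit suffix are assumptions for illustration only; kolla-ansible applies these settings internally through its own container module, not via CLI flags.

```python
# Sketch only: translate a kolla-style healthcheck dict (as printed in the
# task output above) into docker CLI health-check flags. This mapping is an
# assumption for illustration; kolla-ansible does not shell out like this.

def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Render a healthcheck dict as `docker run`-style flags."""
    return [
        # 'test' is ['CMD-SHELL', <command>]; drop the CMD-SHELL marker
        f"--health-cmd={' '.join(hc['test'][1:])}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Example dict copied from the barbican-worker item in the log above
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"],
    "timeout": "30",
}
print(healthcheck_to_docker_args(hc))
```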
2025-09-19 11:44:30.530615 | orchestrator | 2025-09-19 11:44:30.530620 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-19 11:44:30.530626 | orchestrator | Friday 19 September 2025 11:44:03 +0000 (0:00:11.730) 0:01:31.820 ****** 2025-09-19 11:44:30.530632 | orchestrator | 2025-09-19 11:44:30.530637 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-19 11:44:30.530643 | orchestrator | Friday 19 September 2025 11:44:03 +0000 (0:00:00.066) 0:01:31.886 ****** 2025-09-19 11:44:30.530649 | orchestrator | 2025-09-19 11:44:30.530656 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-19 11:44:30.530662 | orchestrator | Friday 19 September 2025 11:44:03 +0000 (0:00:00.068) 0:01:31.954 ****** 2025-09-19 11:44:30.530667 | orchestrator | 2025-09-19 11:44:30.530674 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-19 11:44:30.530680 | orchestrator | Friday 19 September 2025 11:44:03 +0000 (0:00:00.059) 0:01:32.013 ****** 2025-09-19 11:44:30.530686 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:30.530692 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:44:30.530698 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:44:30.530704 | orchestrator | 2025-09-19 11:44:30.530710 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-19 11:44:30.530715 | orchestrator | Friday 19 September 2025 11:44:15 +0000 (0:00:11.871) 0:01:43.885 ****** 2025-09-19 11:44:30.530725 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:30.530731 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:44:30.530736 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:44:30.530742 | orchestrator | 2025-09-19 11:44:30.530747 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker 
container] ***************** 2025-09-19 11:44:30.530753 | orchestrator | Friday 19 September 2025 11:44:21 +0000 (0:00:05.888) 0:01:49.774 ****** 2025-09-19 11:44:30.530759 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:44:30.530768 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:44:30.530775 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:44:30.530781 | orchestrator | 2025-09-19 11:44:30.530788 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:44:30.530794 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 11:44:30.530802 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:44:30.530809 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:44:30.530816 | orchestrator | 2025-09-19 11:44:30.530822 | orchestrator | 2025-09-19 11:44:30.530829 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:44:30.530835 | orchestrator | Friday 19 September 2025 11:44:28 +0000 (0:00:07.165) 0:01:56.939 ****** 2025-09-19 11:44:30.530842 | orchestrator | =============================================================================== 2025-09-19 11:44:30.530848 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.76s 2025-09-19 11:44:30.530854 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.87s 2025-09-19 11:44:30.530861 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.73s 2025-09-19 11:44:30.530867 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.81s 2025-09-19 11:44:30.530873 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 
7.17s 2025-09-19 11:44:30.530880 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.07s 2025-09-19 11:44:30.530886 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.89s 2025-09-19 11:44:30.530897 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.58s 2025-09-19 11:44:30.530904 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.84s 2025-09-19 11:44:30.530910 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.72s 2025-09-19 11:44:30.530916 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.66s 2025-09-19 11:44:30.530923 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.65s 2025-09-19 11:44:30.530929 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.60s 2025-09-19 11:44:30.530935 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.43s 2025-09-19 11:44:30.530941 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.63s 2025-09-19 11:44:30.530948 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.49s 2025-09-19 11:44:30.530955 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.40s 2025-09-19 11:44:30.530961 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.92s 2025-09-19 11:44:30.530967 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.79s 2025-09-19 11:44:30.530973 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.38s 2025-09-19 11:44:33.552204 | orchestrator | 2025-09-19 11:44:33 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in 
state STARTED 2025-09-19 11:44:33.552455 | orchestrator | 2025-09-19 11:44:33 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:44:33.554256 | orchestrator | 2025-09-19 11:44:33 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:44:33.554797 | orchestrator | 2025-09-19 11:44:33 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:44:33.554838 | orchestrator | 2025-09-19 11:44:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:36.576923 | orchestrator | 2025-09-19 11:44:36 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state STARTED 2025-09-19 11:44:36.577124 | orchestrator | 2025-09-19 11:44:36 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:44:36.577837 | orchestrator | 2025-09-19 11:44:36 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:44:36.578397 | orchestrator | 2025-09-19 11:44:36 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:44:36.578441 | orchestrator | 2025-09-19 11:44:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:39.600779 | orchestrator | 2025-09-19 11:44:39 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state STARTED 2025-09-19 11:44:39.600976 | orchestrator | 2025-09-19 11:44:39 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:44:39.601607 | orchestrator | 2025-09-19 11:44:39 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:44:39.602349 | orchestrator | 2025-09-19 11:44:39 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:44:39.602379 | orchestrator | 2025-09-19 11:44:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:42.629392 | orchestrator | 2025-09-19 11:44:42 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state STARTED 2025-09-19 
11:44:42.630465 | orchestrator | 2025-09-19 11:44:42 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:44:42.631366 | orchestrator | 2025-09-19 11:44:42 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:44:42.632798 | orchestrator | 2025-09-19 11:44:42 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:44:42.632858 | orchestrator | 2025-09-19 11:44:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:45.662414 | orchestrator | 2025-09-19 11:44:45 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state STARTED 2025-09-19 11:44:45.663800 | orchestrator | 2025-09-19 11:44:45 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:44:45.664316 | orchestrator | 2025-09-19 11:44:45 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:44:45.664905 | orchestrator | 2025-09-19 11:44:45 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:44:45.664922 | orchestrator | 2025-09-19 11:44:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:48.697196 | orchestrator | 2025-09-19 11:44:48 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state STARTED 2025-09-19 11:44:48.697287 | orchestrator | 2025-09-19 11:44:48 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:44:48.697768 | orchestrator | 2025-09-19 11:44:48 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:44:48.698445 | orchestrator | 2025-09-19 11:44:48 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:44:48.698469 | orchestrator | 2025-09-19 11:44:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:51.726686 | orchestrator | 2025-09-19 11:44:51 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state STARTED 2025-09-19 11:44:51.726873 | orchestrator 
| 2025-09-19 11:44:51 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:44:51.727551 | orchestrator | 2025-09-19 11:44:51 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:44:51.728114 | orchestrator | 2025-09-19 11:44:51 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:44:51.728180 | orchestrator | 2025-09-19 11:44:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:54.752865 | orchestrator | 2025-09-19 11:44:54 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state STARTED 2025-09-19 11:44:54.753425 | orchestrator | 2025-09-19 11:44:54 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:44:54.753963 | orchestrator | 2025-09-19 11:44:54 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:44:54.754912 | orchestrator | 2025-09-19 11:44:54 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:44:54.754999 | orchestrator | 2025-09-19 11:44:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:44:57.782392 | orchestrator | 2025-09-19 11:44:57 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state STARTED 2025-09-19 11:44:57.782808 | orchestrator | 2025-09-19 11:44:57 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:44:57.783631 | orchestrator | 2025-09-19 11:44:57 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:44:57.784388 | orchestrator | 2025-09-19 11:44:57 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:44:57.784528 | orchestrator | 2025-09-19 11:44:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:45:00.809252 | orchestrator | 2025-09-19 11:45:00 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state STARTED 2025-09-19 11:45:00.809525 | orchestrator | 2025-09-19 11:45:00 | INFO  | 
Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:45:00.810492 | orchestrator | 2025-09-19 11:45:00 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:45:00.810965 | orchestrator | 2025-09-19 11:45:00 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:45:00.811103 | orchestrator | 2025-09-19 11:45:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:45:03.837247 | orchestrator | 2025-09-19 11:45:03 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state STARTED 2025-09-19 11:45:03.837692 | orchestrator | 2025-09-19 11:45:03 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:45:03.838293 | orchestrator | 2025-09-19 11:45:03 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:45:03.841353 | orchestrator | 2025-09-19 11:45:03 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:45:03.841373 | orchestrator | 2025-09-19 11:45:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:45:06.864469 | orchestrator | 2025-09-19 11:45:06 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state STARTED 2025-09-19 11:45:06.865575 | orchestrator | 2025-09-19 11:45:06 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED 2025-09-19 11:45:06.867558 | orchestrator | 2025-09-19 11:45:06 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED 2025-09-19 11:45:06.868628 | orchestrator | 2025-09-19 11:45:06 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:45:06.868853 | orchestrator | 2025-09-19 11:45:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:45:09.892611 | orchestrator | 2025-09-19 11:45:09 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state STARTED 2025-09-19 11:45:09.894693 | orchestrator | 2025-09-19 11:45:09 | INFO  | Task 
e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED
2025-09-19 11:45:09.895492 | orchestrator | 2025-09-19 11:45:09 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED
2025-09-19 11:45:09.896411 | orchestrator | 2025-09-19 11:45:09 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:45:09.896436 | orchestrator | 2025-09-19 11:45:09 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:45:12.920525 | orchestrator | 2025-09-19 11:45:12 | INFO  | Task f7a9324b-84a6-4d51-8f63-2e5159d4edf5 is in state SUCCESS
2025-09-19 11:45:15.953706 | orchestrator | 2025-09-19 11:45:15 | INFO  | Task 9d3319b8-7961-442e-9843-fe5e9669aae8 is in state STARTED
[identical STARTED polls for tasks e3579d3b…, c1f781da…, 9d3319b8…, and 9307508d…, repeated every ~3 s from 11:45:15 to 11:46:22, trimmed]
c1f781da-dc1e-4873-a2ac-9727619acc2f is in state STARTED
2025-09-19 11:46:22.788193 | orchestrator | 2025-09-19 11:46:22 | INFO  | Task 9d3319b8-7961-442e-9843-fe5e9669aae8 is in state STARTED
2025-09-19 11:46:22.788869 | orchestrator | 2025-09-19 11:46:22 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:46:22.788897 | orchestrator | 2025-09-19 11:46:22 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:46:25.826997 | orchestrator | 2025-09-19 11:46:25 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED
2025-09-19 11:46:25.829919 | orchestrator | 2025-09-19 11:46:25 | INFO  | Task c1f781da-dc1e-4873-a2ac-9727619acc2f is in state SUCCESS
2025-09-19 11:46:25.832327 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-09-19 11:46:25.832385 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-09-19 11:46:25.832476 | orchestrator | Friday 19 September 2025 11:44:34 +0000 (0:00:00.086) 0:00:00.086 ******
2025-09-19 11:46:25.832488 | orchestrator | changed: [localhost]
2025-09-19 11:46:25.832511 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-09-19 11:46:25.832522 | orchestrator | Friday 19 September 2025 11:44:35 +0000 (0:00:01.139) 0:00:01.226 ******
2025-09-19 11:46:25.832533 | orchestrator | changed: [localhost]
2025-09-19 11:46:25.832554 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-09-19 11:46:25.832579 | orchestrator | Friday 19 September 2025 11:45:06 +0000 (0:00:30.825) 0:00:32.051 ******
2025-09-19 11:46:25.832590 | orchestrator | changed: [localhost]
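The long runs of `Task <uuid> is in state STARTED` followed by `Wait 1 second(s) until the next check` come from a simple state-polling loop over queued tasks. A minimal, testable sketch of that pattern is shown below; `wait_for_tasks` and `get_state` are illustrative names, not the actual OSISM implementation, and the state source is injected so the loop needs no real task queue:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600, log=print):
    """Poll task states until every task reaches a terminal state.

    get_state: callable mapping a task id to a state string such as
    "STARTED" or "SUCCESS" (injected, so the loop stays testable).
    Returns a dict of task id -> terminal state.
    """
    terminal = {"SUCCESS", "FAILURE"}
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in terminal:
                results[task_id] = state
        pending -= results.keys()
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

Feeding it a fake state source reproduces the log shape above: one status line per pending task, a wait message, then another round until everything reports SUCCESS.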
11:46:25.832612 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:46:25.832633 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:46:25.832644 | orchestrator | Friday 19 September 2025 11:45:10 +0000 (0:00:04.345) 0:00:36.396 ******
2025-09-19 11:46:25.832654 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:46:25.832665 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:46:25.832676 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:46:25.832698 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:46:25.832709 | orchestrator | Friday 19 September 2025 11:45:11 +0000 (0:00:00.351) 0:00:36.749 ******
2025-09-19 11:46:25.832719 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-09-19 11:46:25.832730 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-09-19 11:46:25.832742 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-09-19 11:46:25.832752 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-09-19 11:46:25.832774 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-09-19 11:46:25.832785 | orchestrator | skipping: no hosts matched
2025-09-19 11:46:25.832807 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:46:25.832818 | orchestrator | localhost      : ok=3  changed=3  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-09-19 11:46:25.832830 | orchestrator | testbed-node-0 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-09-19 11:46:25.832843 | orchestrator | testbed-node-1 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-09-19 11:46:25.832856 | orchestrator | testbed-node-2 : ok=2  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-09-19 11:46:25.832893 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:46:25.832905 | orchestrator | Friday 19 September 2025 11:45:11 +0000 (0:00:00.524) 0:00:37.273 ******
2025-09-19 11:46:25.832917 | orchestrator | ===============================================================================
2025-09-19 11:46:25.832929 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 30.83s
2025-09-19 11:46:25.832942 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.35s
2025-09-19 11:46:25.832955 | orchestrator | Ensure the destination directory exists --------------------------------- 1.14s
2025-09-19 11:46:25.832984 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s
2025-09-19 11:46:25.832995 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2025-09-19 11:46:25.833035 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:46:25.833057 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:46:25.833068 | orchestrator | Friday 19 September 2025 11:42:27 +0000 (0:00:00.262) 0:00:00.262 ******
2025-09-19 11:46:25.833078 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:46:25.833089 | orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
2025-09-19 11:46:25.833111 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:46:25.833122 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:46:25.833132 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:46:25.833154 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:46:25.833165 | orchestrator | Friday 19 September 2025 11:42:28 +0000 (0:00:00.675) 0:00:00.938 ******
2025-09-19 11:46:25.833175 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-09-19 11:46:25.833187 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-09-19 11:46:25.833197 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-09-19 11:46:25.833208 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-09-19 11:46:25.833219 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-09-19 11:46:25.833230 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-09-19 11:46:25.833251 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-09-19 11:46:25.833273 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 11:46:25.833284 | orchestrator | Friday 19 September 2025 11:42:28 +0000 (0:00:00.597) 0:00:01.535 ******
2025-09-19 11:46:25.833307 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:46:25.833330 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-09-19 11:46:25.833341 | orchestrator | Friday 19 September 2025 11:42:29 +0000 (0:00:01.272) 0:00:02.807 ******
2025-09-19 11:46:25.833351 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:46:25.833362 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:46:25.833373 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:46:25.833383 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:46:25.833394 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:46:25.833405 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:46:25.833431 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-09-19 11:46:25.833442 | orchestrator | Friday 19 September 2025 11:42:31 +0000 (0:00:01.340) 0:00:04.147 ******
2025-09-19 11:46:25.833453 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:46:25.833464 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:46:25.833474 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:46:25.833485 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:46:25.833495 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:46:25.833506 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:46:25.833527 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-09-19 11:46:25.833538 | orchestrator | Friday 19 September 2025 11:42:32 +0000 (0:00:01.102) 0:00:05.250 ******
2025-09-19 11:46:25.833549 | orchestrator | ok: [testbed-node-0] => {"changed": false, "msg": "All assertions passed"}
2025-09-19 11:46:25.833621 | orchestrator | ok: [testbed-node-1] => {"changed": false, "msg": "All assertions passed"}
2025-09-19 11:46:25.833671 | orchestrator | ok: [testbed-node-2] => {"changed": false, "msg": "All assertions passed"}
2025-09-19 11:46:25.833714 | orchestrator | ok: [testbed-node-3] => {"changed": false, "msg": "All assertions passed"}
2025-09-19 11:46:25.833756 | orchestrator | ok: [testbed-node-4] => {"changed": false, "msg": "All assertions passed"}
2025-09-19 11:46:25.833799 | orchestrator | ok: [testbed-node-5] => {"changed": false, "msg": "All assertions passed"}
2025-09-19 11:46:25.833852 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-09-19 11:46:25.833863 | orchestrator | Friday 19 September 2025 11:42:33 +0000 (0:00:00.792) 0:00:06.043 ******
2025-09-19 11:46:25.833873 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.833884 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.833894 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.833905 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.833915 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.833926 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.833947 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-09-19 11:46:25.833993 | orchestrator | Friday 19 September 2025 11:42:33 +0000 (0:00:00.589) 0:00:06.633 ******
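The `service-ks-register` tasks that follow report `ok` for resources that already exist (the `service` project, the `admin` role) and `changed` only when something had to be created (the `neutron` service, its endpoints, the user, the role grants). That ensure-exists idempotency can be sketched as below; `ensure_endpoint` and `FakeClient` are hypothetical stand-ins for illustration, not the real kolla-ansible module or an OpenStack SDK client:

```python
def ensure_endpoint(client, service, interface, url):
    """Create a Keystone-style endpoint only if an identical one is missing,
    returning Ansible-like "ok" / "changed" status."""
    for ep in client.list_endpoints():
        if (ep["service"], ep["interface"], ep["url"]) == (service, interface, url):
            return "ok"        # already registered, nothing to do
    client.create_endpoint(service=service, interface=interface, url=url)
    return "changed"           # endpoint had to be created

class FakeClient:
    """In-memory stand-in for an identity client, used only to demonstrate
    that a second run of the same registration is a no-op."""
    def __init__(self):
        self.endpoints = []
    def list_endpoints(self):
        return list(self.endpoints)
    def create_endpoint(self, **endpoint):
        self.endpoints.append(endpoint)
```

Running the same registration twice leaves exactly one endpoint behind, which is why a redeploy of an already-converged testbed reports `ok` rather than `changed` for these items.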
changed: [testbed-node-0] => (item=neutron (network))
2025-09-19 11:46:25.834078 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-09-19 11:46:25.834089 | orchestrator | Friday 19 September 2025 11:42:37 +0000 (0:00:03.605) 0:00:10.239 ******
2025-09-19 11:46:25.834100 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-09-19 11:46:25.834111 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-09-19 11:46:25.834133 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-09-19 11:46:25.834144 | orchestrator | Friday 19 September 2025 11:42:44 +0000 (0:00:06.990) 0:00:17.229 ******
2025-09-19 11:46:25.834154 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 11:46:25.834176 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-09-19 11:46:25.834186 | orchestrator | Friday 19 September 2025 11:42:47 +0000 (0:00:03.384) 0:00:20.613 ******
2025-09-19 11:46:25.834197 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 11:46:25.834207 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-09-19 11:46:25.834229 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-09-19 11:46:25.834240 | orchestrator | Friday 19 September 2025 11:42:51 +0000 (0:00:03.908) 0:00:24.522 ******
2025-09-19 11:46:25.834250 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 11:46:25.834272 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-09-19 11:46:25.834282 | orchestrator | Friday 19 September 2025 11:42:55 +0000 (0:00:03.410) 0:00:27.933 ******
2025-09-19 11:46:25.834293 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-09-19 11:46:25.834304 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-09-19 11:46:25.834325 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 11:46:25.834343 | orchestrator | Friday 19 September 2025 11:43:03 +0000 (0:00:08.177) 0:00:36.110 ******
2025-09-19 11:46:25.834354 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.834364 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.834390 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.834402 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.834412 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.834423 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.834444 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-09-19 11:46:25.834455 | orchestrator | Friday 19 September 2025 11:43:03 +0000 (0:00:00.712) 0:00:36.822 ******
2025-09-19 11:46:25.834465 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.834476 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.834487 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.834497 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.834508 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.834518 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.834540 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-09-19 11:46:25.834667 | orchestrator | Friday 19 September 2025 11:43:06
+0000 (0:00:02.391) 0:00:39.214 ******
2025-09-19 11:46:25.834690 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:46:25.834702 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:46:25.834712 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:46:25.834723 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:46:25.834734 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:46:25.834745 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:46:25.834766 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-19 11:46:25.834777 | orchestrator | Friday 19 September 2025 11:43:07 +0000 (0:00:01.108) 0:00:40.323 ******
2025-09-19 11:46:25.834788 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.834798 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.834809 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.834819 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.834830 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.834841 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.834862 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-09-19 11:46:25.834873 | orchestrator | Friday 19 September 2025 11:43:09 +0000 (0:00:01.981) 0:00:42.304 ******
2025-09-19 11:46:25.834887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.834902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.834922 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.834949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.835026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.835042 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.835064 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-09-19 11:46:25.835075 | orchestrator | Friday 19 September 2025 11:43:12 +0000 (0:00:03.048) 0:00:45.353 ******
2025-09-19 11:46:25.835095 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not a directory
2025-09-19 11:46:25.835151 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 11:46:25.835173 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 11:46:25.835194 | orchestrator | Friday 19 September 2025 11:43:13 +0000 (0:00:00.881) 0:00:46.234 ******
2025-09-19 11:46:25.835206 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:46:25.835228
| orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-19 11:46:25.835239 | orchestrator | Friday 19 September 2025 11:43:14 +0000 (0:00:01.490) 0:00:47.725 ****** 2025-09-19 11:46:25.835250 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 11:46:25.835277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:46:25.835289 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 11:46:25.835301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:46:25.835318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:46:25.835335 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 11:46:25.835347 | orchestrator | 2025-09-19 11:46:25.835358 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-19 11:46:25.835369 | orchestrator | Friday 19 September 2025 11:43:18 +0000 (0:00:03.459) 0:00:51.184 ****** 2025-09-19 11:46:25.835385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:46:25.835397 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:25.835408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:46:25.835426 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:25.835437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:46:25.835448 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:25.835460 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:46:25.835471 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:46:25.835492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:46:25.835505 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:46:25.835516 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:46:25.835527 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:46:25.835537 | orchestrator | 2025-09-19 11:46:25.835547 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-19 11:46:25.835562 | orchestrator | Friday 19 September 2025 11:43:21 +0000 (0:00:02.817) 0:00:54.002 ****** 2025-09-19 11:46:25.835572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:46:25.835582 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:25.835592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:46:25.835602 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:25.835617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:46:25.835628 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:46:25.835642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:46:25.835652 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:46:25.835662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:46:25.835677 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:46:25.835687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:46:25.835696 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:25.835706 | orchestrator | 2025-09-19 11:46:25.835716 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-19 11:46:25.835725 | orchestrator | Friday 19 September 2025 11:43:24 +0000 (0:00:03.332) 0:00:57.335 ****** 2025-09-19 11:46:25.835735 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:25.835744 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:25.835754 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:25.835763 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:46:25.835772 | orchestrator | skipping: [testbed-node-4] 2025-09-19 
11:46:25.835782 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:46:25.835791 | orchestrator | 2025-09-19 11:46:25.835801 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-19 11:46:25.835811 | orchestrator | Friday 19 September 2025 11:43:26 +0000 (0:00:02.337) 0:00:59.672 ****** 2025-09-19 11:46:25.835820 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:25.835829 | orchestrator | 2025-09-19 11:46:25.835839 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-19 11:46:25.835849 | orchestrator | Friday 19 September 2025 11:43:26 +0000 (0:00:00.124) 0:00:59.797 ****** 2025-09-19 11:46:25.835858 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:25.835867 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:25.835877 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:25.835887 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:46:25.835896 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:46:25.835905 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:46:25.835915 | orchestrator | 2025-09-19 11:46:25.835924 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-19 11:46:25.835934 | orchestrator | Friday 19 September 2025 11:43:27 +0000 (0:00:00.771) 0:01:00.569 ****** 2025-09-19 11:46:25.835953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:46:25.835987 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:25.835998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:46:25.836008 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:25.836018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:46:25.836028 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:25.836038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:46:25.836048 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:46:25.836062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:46:25.836082 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:46:25.836096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:46:25.836106 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:46:25.836116 | orchestrator | 2025-09-19 11:46:25.836126 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-19 11:46:25.836135 | orchestrator | Friday 19 September 2025 11:43:30 +0000 (0:00:02.599) 0:01:03.168 ****** 2025-09-19 11:46:25.836145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:46:25.836156 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 11:46:25.836166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.836183 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.836203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.836213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.836223 | orchestrator |
2025-09-19 11:46:25.836233 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-09-19 11:46:25.836243 | orchestrator | Friday 19 September 2025 11:43:34 +0000 (0:00:03.924) 0:01:07.093 ******
2025-09-19 11:46:25.836253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.836263 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.836288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.836299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.836309 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.836319 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.836329 | orchestrator |
2025-09-19 11:46:25.836339 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-09-19 11:46:25.836349 | orchestrator | Friday 19 September 2025 11:43:40 +0000 (0:00:06.770) 0:01:13.863 ******
2025-09-19 11:46:25.836359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.836374 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.836394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode':
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.836405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.836415 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.836426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.836435 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.836445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.836466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.836477 | orchestrator |
2025-09-19 11:46:25.836487 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-09-19 11:46:25.836496 | orchestrator | Friday 19 September 2025 11:43:46 +0000 (0:00:05.111) 0:01:18.975 ******
2025-09-19 11:46:25.836506 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.836515 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.836525 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.836534 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:46:25.836543 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:46:25.836557 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:46:25.836567 | orchestrator |
2025-09-19 11:46:25.836576 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-09-19 11:46:25.836586 | orchestrator | Friday 19 September 2025 11:43:48 +0000 (0:00:02.347) 0:01:21.323 ******
2025-09-19 11:46:25.836596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.836606 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.836616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.836626 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.836636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.836651 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.836661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.836680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.836691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'],
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.836701 | orchestrator |
2025-09-19 11:46:25.836711 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-09-19 11:46:25.836721 | orchestrator | Friday 19 September 2025 11:43:53 +0000 (0:00:04.873) 0:01:26.196 ******
2025-09-19 11:46:25.836730 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.836739 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.836749 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.836758 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.836768 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.836777 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.836787 | orchestrator |
2025-09-19 11:46:25.836797 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-09-19 11:46:25.836806 | orchestrator | Friday 19 September 2025 11:43:56 +0000 (0:00:03.139) 0:01:29.335 ******
2025-09-19 11:46:25.836822 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.836831 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.836841 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.836851 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.836860 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.836869 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.836879 | orchestrator |
2025-09-19 11:46:25.836888 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-09-19 11:46:25.836898 | orchestrator | Friday 19 September 2025 11:43:59 +0000 (0:00:02.910) 0:01:32.246 ******
2025-09-19 11:46:25.836907 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.836917 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.836926 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.836935 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.836945 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.836954 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.836978 | orchestrator |
2025-09-19 11:46:25.836988 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-09-19 11:46:25.836998 | orchestrator | Friday 19 September 2025 11:44:01 +0000 (0:00:01.677) 0:01:33.924 ******
2025-09-19 11:46:25.837008 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.837018 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.837027 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.837037 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.837046 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.837056 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.837065 | orchestrator |
2025-09-19 11:46:25.837075 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-09-19 11:46:25.837084 | orchestrator | Friday 19 September 2025 11:44:03 +0000 (0:00:02.471) 0:01:36.395 ******
2025-09-19 11:46:25.837094 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.837103 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.837113 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.837122 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.837132 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.837141 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.837151 | orchestrator |
2025-09-19 11:46:25.837160 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-09-19 11:46:25.837170 | orchestrator | Friday 19 September 2025 11:44:06 +0000 (0:00:02.644) 0:01:39.040 ******
2025-09-19 11:46:25.837179 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.837189 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.837198 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.837208 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.837217 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.837232 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.837241 | orchestrator |
2025-09-19 11:46:25.837251 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-09-19 11:46:25.837261 | orchestrator | Friday 19 September 2025 11:44:08 +0000 (0:00:02.155) 0:01:41.195 ******
2025-09-19 11:46:25.837270 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 11:46:25.837280 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.837289 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 11:46:25.837299 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.837312 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 11:46:25.837322 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.837332 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 11:46:25.837341 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.837356 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 11:46:25.837366 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.837375 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 11:46:25.837385 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.837394 | orchestrator |
2025-09-19 11:46:25.837404 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-09-19 11:46:25.837414 | orchestrator | Friday 19 September 2025 11:44:10 +0000 (0:00:02.510) 0:01:43.706 ******
2025-09-19 11:46:25.837424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.837434 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.837444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.837453 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.837463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.837473 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.837495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server':
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.837512 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.837522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.837531 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.837541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.837551 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.837560 | orchestrator |
2025-09-19 11:46:25.837570 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-09-19 11:46:25.837580 | orchestrator | Friday 19 September 2025 11:44:12 +0000 (0:00:02.096) 0:01:45.802 ******
2025-09-19 11:46:25.837589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.837599 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.837614 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.837629 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.837643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.837653 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.837662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 11:46:25.837672 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.837682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.837693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.837703 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.837712 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.837721 | orchestrator |
2025-09-19 11:46:25.837731 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-09-19 11:46:25.837741 | orchestrator | Friday 19 September 2025 11:44:14 +0000 (0:00:01.887) 0:01:47.690 ******
2025-09-19 11:46:25.837757 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.837767 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.837776 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.837786 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.837795 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.837804 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.837814 | orchestrator |
2025-09-19 11:46:25.838005 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-09-19 11:46:25.838064 | orchestrator | Friday 19 September 2025 11:44:17 +0000 (0:00:02.666) 0:01:50.357 ******
2025-09-19 11:46:25.838074 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.838084 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.838093 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.838103 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:46:25.838112 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:46:25.838122 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:46:25.838131 | orchestrator |
2025-09-19 11:46:25.838141 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-09-19 11:46:25.838150 | orchestrator | Friday 19 September 2025 11:44:20 +0000 (0:00:03.428) 0:01:53.785 ******
2025-09-19 11:46:25.838160 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.838169 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.838184 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.838194 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.838203 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.838213 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.838222 | orchestrator |
2025-09-19 11:46:25.838232 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-09-19 11:46:25.838241 | orchestrator | Friday 19 September 2025 11:44:24 +0000 (0:00:03.152) 0:01:56.938 ******
2025-09-19 11:46:25.838251 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.838260 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.838270 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.838279 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.838289 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.838298 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.838307 | orchestrator |
2025-09-19 11:46:25.838317 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-09-19 11:46:25.838326 | orchestrator | Friday 19 September 2025 11:44:27 +0000 (0:00:03.091) 0:02:00.030 ******
2025-09-19 11:46:25.838336 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.838346 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.838355 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.838364 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.838374 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.838383 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.838393 | orchestrator |
2025-09-19 11:46:25.838402 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-09-19 11:46:25.838412 | orchestrator | Friday 19 September 2025 11:44:29 +0000 (0:00:02.536) 0:02:02.567 ******
2025-09-19 11:46:25.838421 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.838430 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.838440 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.838449 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.838459 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.838468 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.838477 | orchestrator |
2025-09-19 11:46:25.838487 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-09-19 11:46:25.838496 | orchestrator | Friday 19 September 2025 11:44:31 +0000 (0:00:02.276) 0:02:04.843 ******
2025-09-19 11:46:25.838506 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.838515 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.838534 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.838543 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.838553 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.838562 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.838572 | orchestrator |
2025-09-19 11:46:25.838581 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-09-19 11:46:25.838591 | orchestrator | Friday 19 September 2025 11:44:34 +0000 (0:00:02.362) 0:02:07.205 ******
2025-09-19 11:46:25.838600 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.838610 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.838621 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.838631 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.838642 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.838653 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.838664 | orchestrator |
2025-09-19 11:46:25.838675 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-09-19 11:46:25.838685 | orchestrator | Friday 19 September 2025 11:44:37 +0000 (0:00:02.944) 0:02:10.150 ******
2025-09-19 11:46:25.838696 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.838707 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.838718 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.838729 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.838740 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.838750 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.838761 | orchestrator |
2025-09-19 11:46:25.838772 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-09-19 11:46:25.838783 | orchestrator | Friday 19 September 2025 11:44:40 +0000 (0:00:02.793) 0:02:12.943 ******
2025-09-19 11:46:25.838793 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.838804 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.838815 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.838826 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.838837 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.838848 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.838859 | orchestrator |
2025-09-19 11:46:25.838870 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-09-19 11:46:25.838881 | orchestrator | Friday 19 September 2025 11:44:42 +0000 (0:00:02.241) 0:02:15.185 ******
2025-09-19 11:46:25.838892 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-19 11:46:25.838903 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.838914 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-09-19 11:46:25.838926 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.838937 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19
11:46:25.838948 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:25.839011 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 11:46:25.839023 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:46:25.839033 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 11:46:25.839042 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:46:25.839052 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 11:46:25.839061 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:46:25.839070 | orchestrator | 2025-09-19 11:46:25.839080 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-19 11:46:25.839090 | orchestrator | Friday 19 September 2025 11:44:44 +0000 (0:00:02.234) 0:02:17.419 ****** 2025-09-19 11:46:25.839105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:46:25.839122 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 11:46:25.839132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:46:25.839142 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:25.839152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:46:25.839162 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:46:25.839176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 11:46:25.839186 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:25.839200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:46:25.839216 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:46:25.839226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 11:46:25.839235 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:46:25.839245 | orchestrator | 2025-09-19 11:46:25.839254 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-19 11:46:25.839264 | orchestrator | Friday 19 September 2025 11:44:47 +0000 (0:00:03.109) 0:02:20.529 ****** 2025-09-19 11:46:25.839274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:46:25.839284 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 11:46:25.839300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:46:25.839324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 11:46:25.839335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 11:46:25.839345 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 11:46:25.839355 | orchestrator |
2025-09-19 11:46:25.839364 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 11:46:25.839374 | orchestrator | Friday 19 September 2025 11:44:52 +0000 (0:00:04.383) 0:02:24.912 ******
2025-09-19 11:46:25.839384 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:25.839393 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:25.839403 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:25.839412 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:46:25.839421 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:46:25.839431 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:46:25.839440 | orchestrator |
2025-09-19 11:46:25.839450 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-09-19 11:46:25.839459 | orchestrator | Friday 19 September 2025 11:44:52 +0000 (0:00:00.789) 0:02:25.702 ******
2025-09-19 11:46:25.839469 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:46:25.839478 | orchestrator |
2025-09-19 11:46:25.839488 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-09-19 11:46:25.839497 | orchestrator | Friday 19 September 2025 11:44:55 +0000 (0:00:02.302) 0:02:28.004 ******
2025-09-19 11:46:25.839506 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:46:25.839516 | orchestrator |
2025-09-19 11:46:25.839525 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-09-19 11:46:25.839538 | orchestrator | Friday 19 September 2025 11:44:57 +0000 (0:00:02.322) 0:02:30.326 ******
2025-09-19 11:46:25.839545 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:46:25.839553 | orchestrator |
2025-09-19 11:46:25.839561 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 11:46:25.839569 | orchestrator | Friday 19 September 2025 11:45:35 +0000 (0:00:38.396) 0:03:08.722 ******
2025-09-19 11:46:25.839576 | orchestrator |
2025-09-19 11:46:25.839584 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 11:46:25.839592 | orchestrator | Friday 19 September 2025 11:45:35 +0000 (0:00:00.063) 0:03:08.786 ******
2025-09-19 11:46:25.839600 | orchestrator |
2025-09-19 11:46:25.839612 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 11:46:25.839620 | orchestrator | Friday 19 September 2025 11:45:35 +0000 (0:00:00.060) 0:03:08.847 ******
2025-09-19 11:46:25.839627 | orchestrator |
2025-09-19 11:46:25.839635 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 11:46:25.839643 | orchestrator | Friday 19 September 2025 11:45:36 +0000 (0:00:00.061) 0:03:08.908 ******
2025-09-19 11:46:25.839650 | orchestrator |
2025-09-19 11:46:25.839658 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 11:46:25.839666 | orchestrator | Friday 19 September 2025 11:45:36 +0000 (0:00:00.210) 0:03:09.118 ******
2025-09-19 11:46:25.839674 | orchestrator |
2025-09-19 11:46:25.839685 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-19 11:46:25.839693 | orchestrator | Friday 19 September 2025 11:45:36 +0000 (0:00:00.114) 0:03:09.233 ******
2025-09-19 11:46:25.839700 | orchestrator |
2025-09-19 11:46:25.839708 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-09-19 11:46:25.839716 | orchestrator | Friday 19 September 2025 11:45:36 +0000 (0:00:00.065) 0:03:09.299 ******
2025-09-19 11:46:25.839724 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:46:25.839731 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:46:25.839739 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:46:25.839746 | orchestrator |
2025-09-19 11:46:25.839754 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-09-19 11:46:25.839762 | orchestrator | Friday 19 September 2025 11:46:01 +0000 (0:00:25.555) 0:03:34.854 ******
2025-09-19 11:46:25.839769 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:46:25.839777 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:46:25.839785 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:46:25.839793 | orchestrator |
2025-09-19 11:46:25.839800 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:46:25.839809 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-19 11:46:25.839817 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-19 11:46:25.839825 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-19 11:46:25.839833 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-09-19 11:46:25.839841 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-09-19 11:46:25.839848 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-09-19 11:46:25.839856 | orchestrator |
2025-09-19 11:46:25.839864 | orchestrator |
2025-09-19 11:46:25.839872 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:46:25.839879 | orchestrator | Friday 19 September 2025 11:46:23 +0000 (0:00:21.567) 0:03:56.422 ******
2025-09-19 11:46:25.839895 | orchestrator | ===============================================================================
2025-09-19 11:46:25.839903 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 38.40s
2025-09-19 11:46:25.839910 | orchestrator | neutron : Restart neutron-server container ----------------------------- 25.56s
2025-09-19 11:46:25.839918 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 21.57s
2025-09-19 11:46:25.839926 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.18s
2025-09-19 11:46:25.839933 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.99s
2025-09-19 11:46:25.839941 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.77s
2025-09-19 11:46:25.839948 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 5.11s
2025-09-19 11:46:25.839956 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.87s
2025-09-19 11:46:25.839976 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.38s
2025-09-19 11:46:25.839984 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.92s
2025-09-19 11:46:25.839992 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.91s
2025-09-19 11:46:25.840000 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.61s
2025-09-19 11:46:25.840007 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.46s
2025-09-19 11:46:25.840015 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.43s
2025-09-19 11:46:25.840023 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.41s
2025-09-19 11:46:25.840030 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.38s
2025-09-19 11:46:25.840038 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.33s
2025-09-19 11:46:25.840046 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 3.15s
2025-09-19 11:46:25.840053 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 3.14s
2025-09-19 11:46:25.840061 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.11s
2025-09-19 11:46:25.840073 | orchestrator | 2025-09-19 11:46:25 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED
2025-09-19 11:46:25.840081 | orchestrator | 2025-09-19 11:46:25 | INFO  | Task 9d3319b8-7961-442e-9843-fe5e9669aae8 is in state STARTED
2025-09-19 11:46:25.840089 | orchestrator | 2025-09-19 11:46:25 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:46:25.840096 | orchestrator | 2025-09-19 11:46:25 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:46:28.883323 | orchestrator | 2025-09-19 11:46:28 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED
2025-09-19 11:46:28.884121 | orchestrator | 2025-09-19 11:46:28 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED
2025-09-19 11:46:28.886396 | orchestrator | 2025-09-19 11:46:28 | INFO  | Task 9d3319b8-7961-442e-9843-fe5e9669aae8 is in state SUCCESS
2025-09-19 11:46:28.888171 | orchestrator |
2025-09-19 11:46:28.888220 | orchestrator |
2025-09-19 11:46:28.888241 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:46:28.888263 | orchestrator |
2025-09-19 11:46:28.888282 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:46:28.888302 | orchestrator | Friday 19 September 2025 11:45:16 +0000 (0:00:00.400) 0:00:00.400 ******
2025-09-19 11:46:28.888314 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:46:28.888325 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:46:28.888336 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:46:28.888346 | orchestrator |
2025-09-19 11:46:28.888357 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:46:28.888391 | orchestrator | Friday 19 September 2025 11:45:16 +0000 (0:00:00.268) 0:00:00.668 ******
2025-09-19 11:46:28.888403 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-09-19 11:46:28.888414 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-09-19 11:46:28.888424 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-09-19 11:46:28.888435 | orchestrator |
2025-09-19 11:46:28.888445 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-09-19 11:46:28.888456 | orchestrator |
2025-09-19 11:46:28.888466 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-19 11:46:28.888477 | orchestrator | Friday 19 September 2025 11:45:17 +0000 (0:00:00.545) 0:00:01.034 ******
2025-09-19 11:46:28.888488 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:46:28.888499 | orchestrator |
2025-09-19 11:46:28.888509 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-09-19 11:46:28.888520 | orchestrator | Friday 19 September 2025 11:45:17 +0000 (0:00:00.545) 0:00:01.579 ******
2025-09-19 11:46:28.888532 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-09-19 11:46:28.888542 | orchestrator |
2025-09-19 11:46:28.888553 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-09-19 11:46:28.888564 | orchestrator | Friday 19 September 2025 11:45:21 +0000 (0:00:03.578) 0:00:05.157 ******
2025-09-19 11:46:28.888574 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-09-19 11:46:28.888585 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-09-19 11:46:28.888596 | orchestrator |
2025-09-19 11:46:28.888606 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-09-19 11:46:28.888617 | orchestrator | Friday 19 September 2025 11:45:28 +0000 (0:00:07.268) 0:00:12.426 ******
2025-09-19 11:46:28.888627 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 11:46:28.888638 | orchestrator |
2025-09-19 11:46:28.888648 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-09-19 11:46:28.888659 | orchestrator | Friday 19 September 2025 11:45:32 +0000 (0:00:03.475) 0:00:15.902 ******
2025-09-19 11:46:28.888669 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 11:46:28.888680 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-09-19 11:46:28.888690 | orchestrator |
2025-09-19 11:46:28.888701 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-09-19 11:46:28.888712 | orchestrator | Friday 19 September 2025 11:45:36 +0000 (0:00:04.149) 0:00:20.052 ******
2025-09-19 11:46:28.888728 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 11:46:28.888746 | orchestrator |
2025-09-19 11:46:28.888765 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-09-19 11:46:28.888783 | orchestrator | Friday 19 September 2025 11:45:39 +0000 (0:00:03.566) 0:00:23.619 ******
2025-09-19 11:46:28.888802 | orchestrator | changed: [testbed-node-0]
=> (item=placement -> service -> admin) 2025-09-19 11:46:28.888821 | orchestrator | 2025-09-19 11:46:28.888851 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-19 11:46:28.888872 | orchestrator | Friday 19 September 2025 11:45:44 +0000 (0:00:04.766) 0:00:28.385 ****** 2025-09-19 11:46:28.888890 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:28.888911 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:28.888930 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:28.888999 | orchestrator | 2025-09-19 11:46:28.889014 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-19 11:46:28.889027 | orchestrator | Friday 19 September 2025 11:45:45 +0000 (0:00:00.544) 0:00:28.930 ****** 2025-09-19 11:46:28.889054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.889098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.889113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.889126 | orchestrator | 2025-09-19 11:46:28.889138 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-19 11:46:28.889149 | orchestrator | Friday 19 September 2025 11:45:46 +0000 (0:00:01.230) 0:00:30.160 ****** 
2025-09-19 11:46:28.889159 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:28.889170 | orchestrator | 2025-09-19 11:46:28.889181 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-19 11:46:28.889191 | orchestrator | Friday 19 September 2025 11:45:46 +0000 (0:00:00.094) 0:00:30.255 ****** 2025-09-19 11:46:28.889202 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:28.889212 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:28.889223 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:28.889234 | orchestrator | 2025-09-19 11:46:28.889244 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-19 11:46:28.889255 | orchestrator | Friday 19 September 2025 11:45:46 +0000 (0:00:00.325) 0:00:30.580 ****** 2025-09-19 11:46:28.889266 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:46:28.889276 | orchestrator | 2025-09-19 11:46:28.889287 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-19 11:46:28.889298 | orchestrator | Friday 19 September 2025 11:45:47 +0000 (0:00:00.720) 0:00:31.302 ****** 2025-09-19 11:46:28.889316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.889339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.889352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.889363 | orchestrator | 2025-09-19 11:46:28.889374 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-19 11:46:28.889385 | orchestrator | Friday 19 September 2025 11:45:49 +0000 (0:00:01.560) 0:00:32.863 ****** 2025-09-19 11:46:28.889396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 11:46:28.889407 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:28.889425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 11:46:28.889437 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:28.889458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 11:46:28.889470 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:28.889481 | orchestrator | 2025-09-19 11:46:28.889492 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-19 11:46:28.889503 | orchestrator | Friday 19 September 2025 11:45:50 +0000 (0:00:01.031) 0:00:33.894 ****** 2025-09-19 11:46:28.889514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 11:46:28.889525 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:28.889536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 11:46:28.889553 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:28.889564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 11:46:28.889575 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:28.889586 | orchestrator | 2025-09-19 11:46:28.889597 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-19 11:46:28.889608 | orchestrator | Friday 19 September 2025 11:45:50 +0000 (0:00:00.609) 0:00:34.504 ****** 2025-09-19 11:46:28.889629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.889642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.889653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.889670 | orchestrator | 2025-09-19 11:46:28.889681 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-19 11:46:28.889692 | orchestrator | Friday 19 September 2025 11:45:51 +0000 (0:00:01.297) 0:00:35.801 ****** 2025-09-19 11:46:28.889703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.889719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.889738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.889750 | orchestrator | 2025-09-19 11:46:28.889761 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-19 11:46:28.889772 | orchestrator | Friday 19 September 2025 11:45:54 +0000 (0:00:02.365) 0:00:38.167 ****** 2025-09-19 11:46:28.889783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-19 11:46:28.889793 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-19 11:46:28.889804 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-19 11:46:28.889815 | orchestrator | 2025-09-19 11:46:28.889826 | orchestrator | TASK [placement : Copying over 
migrate-db.rc.j2 configuration] ***************** 2025-09-19 11:46:28.889842 | orchestrator | Friday 19 September 2025 11:45:55 +0000 (0:00:01.556) 0:00:39.723 ****** 2025-09-19 11:46:28.889852 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:46:28.889863 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:46:28.889873 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:46:28.889884 | orchestrator | 2025-09-19 11:46:28.889895 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-19 11:46:28.889905 | orchestrator | Friday 19 September 2025 11:45:58 +0000 (0:00:02.191) 0:00:41.915 ****** 2025-09-19 11:46:28.889917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 11:46:28.889928 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:28.889939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 11:46:28.889965 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:28.889994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 11:46:28.890007 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:28.890068 | orchestrator | 2025-09-19 11:46:28.890082 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-19 11:46:28.890093 | orchestrator | Friday 19 September 
2025 11:45:59 +0000 (0:00:01.360) 0:00:43.276 ****** 2025-09-19 11:46:28.890104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.890124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.890135 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 11:46:28.890147 | orchestrator | 2025-09-19 11:46:28.890158 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-19 11:46:28.890168 | orchestrator | Friday 19 September 2025 11:46:01 +0000 (0:00:01.983) 0:00:45.259 ****** 2025-09-19 11:46:28.890179 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:46:28.890190 | orchestrator | 2025-09-19 11:46:28.890201 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-19 11:46:28.890216 | orchestrator | Friday 19 September 2025 11:46:03 +0000 (0:00:02.339) 0:00:47.599 ****** 2025-09-19 11:46:28.890227 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:46:28.890238 | orchestrator | 2025-09-19 11:46:28.890248 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-19 11:46:28.890260 | orchestrator | Friday 19 September 2025 11:46:06 +0000 (0:00:02.344) 0:00:49.943 ****** 2025-09-19 11:46:28.890277 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:46:28.890289 
| orchestrator | 2025-09-19 11:46:28.890300 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-19 11:46:28.890311 | orchestrator | Friday 19 September 2025 11:46:17 +0000 (0:00:11.852) 0:01:01.795 ****** 2025-09-19 11:46:28.890322 | orchestrator | 2025-09-19 11:46:28.890332 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-19 11:46:28.890343 | orchestrator | Friday 19 September 2025 11:46:18 +0000 (0:00:00.063) 0:01:01.858 ****** 2025-09-19 11:46:28.890354 | orchestrator | 2025-09-19 11:46:28.890371 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-19 11:46:28.890382 | orchestrator | Friday 19 September 2025 11:46:18 +0000 (0:00:00.063) 0:01:01.922 ****** 2025-09-19 11:46:28.890393 | orchestrator | 2025-09-19 11:46:28.890404 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-19 11:46:28.890415 | orchestrator | Friday 19 September 2025 11:46:18 +0000 (0:00:00.063) 0:01:01.986 ****** 2025-09-19 11:46:28.890426 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:46:28.890436 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:46:28.890447 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:46:28.890458 | orchestrator | 2025-09-19 11:46:28.890469 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:46:28.890481 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:46:28.890493 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 11:46:28.890504 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 11:46:28.890515 | orchestrator | 2025-09-19 11:46:28.890525 | orchestrator | 2025-09-19 
11:46:28.890536 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:46:28.890547 | orchestrator | Friday 19 September 2025 11:46:27 +0000 (0:00:09.698) 0:01:11.684 ****** 2025-09-19 11:46:28.890558 | orchestrator | =============================================================================== 2025-09-19 11:46:28.890569 | orchestrator | placement : Running placement bootstrap container ---------------------- 11.85s 2025-09-19 11:46:28.890580 | orchestrator | placement : Restart placement-api container ----------------------------- 9.70s 2025-09-19 11:46:28.890590 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.27s 2025-09-19 11:46:28.890601 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.77s 2025-09-19 11:46:28.890612 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.15s 2025-09-19 11:46:28.890623 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.58s 2025-09-19 11:46:28.890634 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.57s 2025-09-19 11:46:28.890644 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.48s 2025-09-19 11:46:28.890655 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.37s 2025-09-19 11:46:28.890666 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.34s 2025-09-19 11:46:28.890677 | orchestrator | placement : Creating placement databases -------------------------------- 2.34s 2025-09-19 11:46:28.890688 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.19s 2025-09-19 11:46:28.890699 | orchestrator | placement : Check placement containers ---------------------------------- 1.98s 2025-09-19 11:46:28.890710 | 
orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.56s
2025-09-19 11:46:28.890720 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.56s
2025-09-19 11:46:28.890731 | orchestrator | placement : Copying over existing policy file --------------------------- 1.36s
2025-09-19 11:46:28.890742 | orchestrator | placement : Copying over config.json files for services ----------------- 1.30s
2025-09-19 11:46:28.890753 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.23s
2025-09-19 11:46:28.890764 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.03s
2025-09-19 11:46:28.890774 | orchestrator | placement : include_tasks ----------------------------------------------- 0.72s
2025-09-19 11:46:28.890785 | orchestrator | 2025-09-19 11:46:28 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:46:28.890802 | orchestrator | 2025-09-19 11:46:28 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:46:31.921836 | orchestrator | 2025-09-19 11:46:31 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state STARTED
2025-09-19 11:46:31.923285 | orchestrator | 2025-09-19 11:46:31 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED
2025-09-19 11:46:31.925027 | orchestrator | 2025-09-19 11:46:31 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED
2025-09-19 11:46:31.926594 | orchestrator | 2025-09-19 11:46:31 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:46:31.926634 | orchestrator | 2025-09-19 11:46:31 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:46:56.285015 | orchestrator | 2025-09-19 11:46:56 | INFO  | Task e3579d3b-f2cc-41d8-aec8-4d3cc877ac6f is in state SUCCESS
2025-09-19 11:46:56.286837 | orchestrator |
2025-09-19 11:46:56.286945 | orchestrator |
2025-09-19 11:46:56.286960 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:46:56.286973 | orchestrator |
2025-09-19 11:46:56.286984 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:46:56.286996 | orchestrator | Friday 19 September 2025 11:43:56 +0000 (0:00:00.367) 0:00:00.367 ******
2025-09-19 11:46:56.287007 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:46:56.287018 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:46:56.287030 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:46:56.287040 | orchestrator |
2025-09-19 11:46:56.287051 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:46:56.287062 | orchestrator | Friday 19 September 2025 11:43:56 +0000 (0:00:00.296) 0:00:00.664 ******
2025-09-19 11:46:56.287074 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-09-19 11:46:56.287085 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-09-19 11:46:56.287095 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-09-19 11:46:56.287106 | orchestrator |
2025-09-19 11:46:56.287117 | orchestrator | PLAY [Apply role designate]
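The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages above come from a simple poll-until-done loop. A minimal sketch of that pattern follows; `get_state` is a hypothetical stand-in for the real osism task-status call, which is not shown in this log:

```python
import time

def wait_for_task(task_id, get_state, interval=1.0, timeout=300.0):
    """Poll a task until it leaves the STARTED state, as the deploy log does.

    get_state is a caller-supplied callable (hypothetical stand-in for the
    real task API) that returns a state string such as "STARTED" or "SUCCESS".
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state(task_id)
        print(f"Task {task_id} is in state {state}")
        if state != "STARTED":
            return state
        # Mirrors the log's "Wait 1 second(s) until the next check" message.
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} still STARTED after {timeout}s")
```

With a task that reports STARTED twice and then SUCCESS, the loop returns "SUCCESS" after two short waits, matching the cadence seen in the log.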
**************************************************** 2025-09-19 11:46:56.287127 | orchestrator | 2025-09-19 11:46:56.287138 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 11:46:56.287648 | orchestrator | Friday 19 September 2025 11:43:57 +0000 (0:00:00.782) 0:00:01.446 ****** 2025-09-19 11:46:56.287665 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:46:56.287677 | orchestrator | 2025-09-19 11:46:56.287688 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-19 11:46:56.287699 | orchestrator | Friday 19 September 2025 11:43:58 +0000 (0:00:00.955) 0:00:02.401 ****** 2025-09-19 11:46:56.287710 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-19 11:46:56.287720 | orchestrator | 2025-09-19 11:46:56.287731 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-19 11:46:56.287768 | orchestrator | Friday 19 September 2025 11:44:02 +0000 (0:00:04.054) 0:00:06.456 ****** 2025-09-19 11:46:56.287779 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-19 11:46:56.287790 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-19 11:46:56.287801 | orchestrator | 2025-09-19 11:46:56.287811 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-19 11:46:56.287822 | orchestrator | Friday 19 September 2025 11:44:09 +0000 (0:00:06.762) 0:00:13.218 ****** 2025-09-19 11:46:56.287833 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 11:46:56.287844 | orchestrator | 2025-09-19 11:46:56.287855 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-19 
11:46:56.287865 | orchestrator | Friday 19 September 2025 11:44:12 +0000 (0:00:03.463) 0:00:16.681 ****** 2025-09-19 11:46:56.287876 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 11:46:56.287914 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-19 11:46:56.287926 | orchestrator | 2025-09-19 11:46:56.287938 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-19 11:46:56.287949 | orchestrator | Friday 19 September 2025 11:44:16 +0000 (0:00:04.016) 0:00:20.698 ****** 2025-09-19 11:46:56.287960 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 11:46:56.287971 | orchestrator | 2025-09-19 11:46:56.287982 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-19 11:46:56.287992 | orchestrator | Friday 19 September 2025 11:44:20 +0000 (0:00:03.594) 0:00:24.292 ****** 2025-09-19 11:46:56.288003 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-19 11:46:56.288014 | orchestrator | 2025-09-19 11:46:56.288025 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-19 11:46:56.288035 | orchestrator | Friday 19 September 2025 11:44:24 +0000 (0:00:04.397) 0:00:28.689 ****** 2025-09-19 11:46:56.288049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.288098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.288113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.288139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
2025-09-19 11:46:56.288783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.288817 | orchestrator | 2025-09-19 11:46:56.288829 | 
orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-19 11:46:56.288841 | orchestrator | Friday 19 September 2025 11:44:28 +0000 (0:00:04.176) 0:00:32.866 ****** 2025-09-19 11:46:56.288851 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:56.288863 | orchestrator | 2025-09-19 11:46:56.288874 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-19 11:46:56.288917 | orchestrator | Friday 19 September 2025 11:44:29 +0000 (0:00:00.203) 0:00:33.070 ****** 2025-09-19 11:46:56.289176 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:56.289245 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:56.289263 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:56.289273 | orchestrator | 2025-09-19 11:46:56.289285 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 11:46:56.289295 | orchestrator | Friday 19 September 2025 11:44:29 +0000 (0:00:00.304) 0:00:33.374 ****** 2025-09-19 11:46:56.289306 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:46:56.289317 | orchestrator | 2025-09-19 11:46:56.289328 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-19 11:46:56.289339 | orchestrator | Friday 19 September 2025 11:44:29 +0000 (0:00:00.541) 0:00:33.916 ****** 2025-09-19 11:46:56.289393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.289418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.289430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.289442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.289744 | orchestrator | 2025-09-19 11:46:56.289756 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-19 11:46:56.289768 | orchestrator | Friday 19 September 2025 11:44:36 +0000 (0:00:06.360) 0:00:40.277 ****** 2025-09-19 11:46:56.289814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.289957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:46:56.289990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  
2025-09-19 11:46:56.290087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290111 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:56.290136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.290205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:46:56.290220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290273 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:56.290290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 
11:46:56.290330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:46:56.290343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290363 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290389 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:56.290399 | orchestrator | 2025-09-19 11:46:56.290409 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-19 11:46:56.290419 | orchestrator | Friday 19 September 2025 11:44:37 +0000 (0:00:01.092) 0:00:41.370 ****** 2025-09-19 11:46:56.290434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.290470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:46:56.290482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290528 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:56.290631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.290673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:46:56.290686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290733 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:56.290744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.290786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:46:56.290799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.290845 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:56.290855 | orchestrator | 2025-09-19 11:46:56.290865 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-19 11:46:56.290875 | orchestrator | Friday 19 September 2025 11:44:39 +0000 (0:00:02.115) 0:00:43.485 ****** 2025-09-19 11:46:56.290909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.290950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.290963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.290974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.290991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291270 | orchestrator | 2025-09-19 11:46:56.291279 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-09-19 11:46:56.291289 | orchestrator | Friday 19 September 2025 11:44:45 +0000 (0:00:06.400) 0:00:49.886 ****** 2025-09-19 11:46:56.291299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.291343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.291356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.291372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291558 | orchestrator | 2025-09-19 11:46:56.291568 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-19 11:46:56.291578 | orchestrator | Friday 19 September 2025 11:45:02 +0000 (0:00:16.806) 0:01:06.693 ****** 2025-09-19 11:46:56.291587 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 11:46:56.291597 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 11:46:56.291607 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 11:46:56.291617 | orchestrator | 2025-09-19 11:46:56.291627 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-19 11:46:56.291636 | orchestrator | Friday 19 September 2025 11:45:06 +0000 (0:00:04.198) 0:01:10.891 ****** 2025-09-19 11:46:56.291646 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 11:46:56.291656 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 11:46:56.291665 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 11:46:56.291675 | orchestrator | 2025-09-19 11:46:56.291685 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-19 11:46:56.291695 | orchestrator | Friday 19 September 2025 11:45:09 +0000 (0:00:02.214) 0:01:13.106 ****** 
2025-09-19 11:46:56.291716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.291734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.291745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.291756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.291781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.291797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.291815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.291835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.291845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.291855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.291877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2025-09-19 11:46:56.292061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292097 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292108 | orchestrator | 2025-09-19 11:46:56.292120 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-19 11:46:56.292131 | orchestrator | Friday 19 September 2025 11:45:12 +0000 (0:00:02.851) 0:01:15.957 ****** 2025-09-19 11:46:56.292161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.292180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.292191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.292202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292371 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292392 | orchestrator | 2025-09-19 11:46:56.292408 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 11:46:56.292418 | orchestrator | Friday 19 September 2025 11:45:14 +0000 (0:00:02.413) 0:01:18.371 ****** 2025-09-19 11:46:56.292428 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:56.292438 | orchestrator | skipping: [testbed-node-1] 2025-09-19 
11:46:56.292447 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:56.292457 | orchestrator | 2025-09-19 11:46:56.292467 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-19 11:46:56.292475 | orchestrator | Friday 19 September 2025 11:45:14 +0000 (0:00:00.392) 0:01:18.764 ****** 2025-09-19 11:46:56.292492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.292501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:46:56.292510 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292548 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:46:56.292564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.292573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:46:56.292582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292619 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:46:56.292634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 11:46:56.292644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 11:46:56.292652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:46:56.292690 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:46:56.292698 | orchestrator | 2025-09-19 11:46:56.292706 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-19 11:46:56.292714 | orchestrator | Friday 19 September 2025 11:45:15 +0000 (0:00:00.986) 0:01:19.750 ****** 2025-09-19 11:46:56.292731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.292741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.292749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 11:46:56.292758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:46:56.292928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 11:46:56.292936 | orchestrator |
2025-09-19 11:46:56.292944 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-19 11:46:56.292952 | orchestrator | Friday 19 September 2025 11:45:20 +0000 (0:00:04.736) 0:01:24.487 ******
2025-09-19 11:46:56.292960 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:46:56.292968 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:46:56.292976 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:46:56.292984 | orchestrator |
2025-09-19 11:46:56.292992 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-09-19 11:46:56.293000 | orchestrator | Friday 19 September 2025 11:45:20 +0000 (0:00:00.276) 0:01:24.763 ******
2025-09-19 11:46:56.293008 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-09-19 11:46:56.293016 | orchestrator |
2025-09-19 11:46:56.293024 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-09-19 11:46:56.293032 | orchestrator | Friday 19 September 2025 11:45:23 +0000 (0:00:02.286) 0:01:27.050 ******
2025-09-19 11:46:56.293043 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 11:46:56.293052 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-09-19 11:46:56.293060 | orchestrator |
2025-09-19 11:46:56.293068 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-09-19 11:46:56.293080 | orchestrator | Friday 19 September 2025 11:45:25 +0000 (0:00:02.822) 0:01:29.872 ******
2025-09-19 11:46:56.293089 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:46:56.293097 | orchestrator |
2025-09-19 11:46:56.293105 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-19 11:46:56.293113 | orchestrator | Friday 19 September 2025 11:45:40 +0000 (0:00:14.898) 0:01:44.770 ******
2025-09-19 11:46:56.293121 | orchestrator |
2025-09-19 11:46:56.293128 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-19 11:46:56.293136 | orchestrator | Friday 19 September 2025 11:45:40 +0000 (0:00:00.124) 0:01:44.894 ******
2025-09-19 11:46:56.293144 | orchestrator |
2025-09-19 11:46:56.293152 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-19 11:46:56.293160 | orchestrator | Friday 19 September 2025 11:45:41 +0000 (0:00:00.144) 0:01:45.038 ******
2025-09-19 11:46:56.293168 | orchestrator |
2025-09-19 11:46:56.293176 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-09-19 11:46:56.293184 | orchestrator | Friday 19 September 2025 11:45:41 +0000 (0:00:00.168) 0:01:45.207 ******
2025-09-19 11:46:56.293192 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:46:56.293205 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:46:56.293213 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:46:56.293221 | orchestrator |
2025-09-19 11:46:56.293229 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-09-19 11:46:56.293237 | orchestrator | Friday 19 September 2025 11:45:56 +0000 (0:00:14.836) 0:02:00.043 ******
2025-09-19 11:46:56.293244 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:46:56.293252 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:46:56.293260 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:46:56.293268 | orchestrator |
2025-09-19 11:46:56.293276 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-09-19 11:46:56.293284 | orchestrator | Friday 19 September 2025 11:46:09 +0000 (0:00:13.222) 0:02:13.265 ******
2025-09-19 11:46:56.293292 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:46:56.293299 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:46:56.293307 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:46:56.293315 | orchestrator |
2025-09-19 11:46:56.293323 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-09-19 11:46:56.293331 | orchestrator | Friday 19 September 2025 11:46:19 +0000 (0:00:09.977) 0:02:23.243 ******
2025-09-19 11:46:56.293340 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:46:56.293347 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:46:56.293355 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:46:56.293363 | orchestrator |
2025-09-19 11:46:56.293371 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-09-19 11:46:56.293379 | orchestrator | Friday 19 September 2025 11:46:29 +0000 (0:00:09.862) 0:02:33.106 ******
2025-09-19 11:46:56.293387 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:46:56.293395 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:46:56.293403 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:46:56.293411 | orchestrator |
2025-09-19 11:46:56.293419 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-09-19 11:46:56.293427 | orchestrator | Friday 19 September 2025 11:46:34 +0000 (0:00:05.514) 0:02:38.621 ******
2025-09-19 11:46:56.293435 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:46:56.293442 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:46:56.293450 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:46:56.293458 | orchestrator |
2025-09-19 11:46:56.293466 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-09-19 11:46:56.293474 | orchestrator | Friday 19 September 2025 11:46:46 +0000 (0:00:12.173) 0:02:50.795 ******
2025-09-19 11:46:56.293481 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:46:56.293489 | orchestrator |
2025-09-19 11:46:56.293497 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:46:56.293506 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 11:46:56.293514 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:46:56.293523 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 11:46:56.293531 | orchestrator |
2025-09-19 11:46:56.293539 | orchestrator |
2025-09-19 11:46:56.293546 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:46:56.293554 | orchestrator | Friday 19 September 2025 11:46:52 +0000 (0:00:06.144) 0:02:56.939 ******
2025-09-19 11:46:56.293562 | orchestrator | ===============================================================================
2025-09-19 11:46:56.293570 | orchestrator | designate : Copying over designate.conf -------------------------------- 16.81s
2025-09-19 11:46:56.293578 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.90s
2025-09-19 11:46:56.293586 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.84s
2025-09-19 11:46:56.293599 | orchestrator | designate : Restart designate-api container ---------------------------- 13.22s
2025-09-19 11:46:56.293607 | orchestrator | designate : Restart designate-worker container ------------------------- 12.17s
2025-09-19 11:46:56.293615 | orchestrator | designate : Restart designate-central container ------------------------- 9.98s
2025-09-19 11:46:56.293622 | orchestrator | designate : Restart designate-producer container ------------------------ 9.86s
2025-09-19 11:46:56.293630 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.76s
2025-09-19 11:46:56.293642 | orchestrator | designate : Copying over config.json files for services ----------------- 6.40s
2025-09-19 11:46:56.293650 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.36s
2025-09-19 11:46:56.293662 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.14s
2025-09-19 11:46:56.293670 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.52s
2025-09-19 11:46:56.293678 | orchestrator | designate : Check designate containers ---------------------------------- 4.74s
2025-09-19 11:46:56.293686 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.40s
2025-09-19 11:46:56.293694 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.20s
2025-09-19 11:46:56.293701 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.18s
2025-09-19 11:46:56.293709 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.05s
2025-09-19 11:46:56.293717 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.02s
2025-09-19 11:46:56.293725 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.59s
2025-09-19 11:46:56.293733 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.46s
2025-09-19 11:46:56.293741 | orchestrator | 2025-09-19 11:46:56 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED
2025-09-19 11:46:56.293749 | orchestrator | 2025-09-19 11:46:56 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED
2025-09-19 11:46:56.293757 | orchestrator | 2025-09-19 11:46:56 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:46:56.293765 | orchestrator | 2025-09-19 11:46:56 | INFO  | Task 8928ddd1-cc17-41cc-a87c-d7ed7c415e2a is in state STARTED 2025-09-19 11:46:56.293773 | orchestrator | 2025-09-19 11:46:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:46:59.319530 | orchestrator | 2025-09-19 11:46:59 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:46:59.320286 | orchestrator | 2025-09-19 11:46:59 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:46:59.320783 | orchestrator | 2025-09-19 11:46:59 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:46:59.321857 | orchestrator | 2025-09-19 11:46:59 | INFO  | Task 8928ddd1-cc17-41cc-a87c-d7ed7c415e2a is in state STARTED 2025-09-19 11:46:59.321949 | orchestrator | 2025-09-19 11:46:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:02.358282 | orchestrator | 2025-09-19 11:47:02 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:02.359668 | orchestrator | 2025-09-19 11:47:02 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:02.362051 | orchestrator | 2025-09-19 11:47:02 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:02.363636 | orchestrator | 2025-09-19 11:47:02 | INFO  | Task 8928ddd1-cc17-41cc-a87c-d7ed7c415e2a is in state SUCCESS 2025-09-19 11:47:02.366136 | orchestrator | 2025-09-19 11:47:02 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:02.366169 | orchestrator | 2025-09-19 11:47:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:05.408284 | orchestrator | 2025-09-19 11:47:05 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:05.409774 | orchestrator | 2025-09-19 11:47:05 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:05.411502 | 
orchestrator | 2025-09-19 11:47:05 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:05.413110 | orchestrator | 2025-09-19 11:47:05 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:05.413147 | orchestrator | 2025-09-19 11:47:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:08.454493 | orchestrator | 2025-09-19 11:47:08 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:08.455458 | orchestrator | 2025-09-19 11:47:08 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:08.457798 | orchestrator | 2025-09-19 11:47:08 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:08.459569 | orchestrator | 2025-09-19 11:47:08 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:08.460107 | orchestrator | 2025-09-19 11:47:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:11.504839 | orchestrator | 2025-09-19 11:47:11 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:11.506101 | orchestrator | 2025-09-19 11:47:11 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:11.507631 | orchestrator | 2025-09-19 11:47:11 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:11.509922 | orchestrator | 2025-09-19 11:47:11 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:11.509993 | orchestrator | 2025-09-19 11:47:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:14.559163 | orchestrator | 2025-09-19 11:47:14 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:14.561089 | orchestrator | 2025-09-19 11:47:14 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:14.563347 | orchestrator | 2025-09-19 
11:47:14 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:14.564927 | orchestrator | 2025-09-19 11:47:14 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:14.564952 | orchestrator | 2025-09-19 11:47:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:17.614239 | orchestrator | 2025-09-19 11:47:17 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:17.616390 | orchestrator | 2025-09-19 11:47:17 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:17.618444 | orchestrator | 2025-09-19 11:47:17 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:17.620685 | orchestrator | 2025-09-19 11:47:17 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:17.620708 | orchestrator | 2025-09-19 11:47:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:20.654771 | orchestrator | 2025-09-19 11:47:20 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:20.654891 | orchestrator | 2025-09-19 11:47:20 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:20.655475 | orchestrator | 2025-09-19 11:47:20 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:20.657125 | orchestrator | 2025-09-19 11:47:20 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:20.657160 | orchestrator | 2025-09-19 11:47:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:23.694078 | orchestrator | 2025-09-19 11:47:23 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:23.696081 | orchestrator | 2025-09-19 11:47:23 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:23.699234 | orchestrator | 2025-09-19 11:47:23 | INFO  | Task 
9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:23.700702 | orchestrator | 2025-09-19 11:47:23 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:23.701526 | orchestrator | 2025-09-19 11:47:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:26.738198 | orchestrator | 2025-09-19 11:47:26 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:26.740061 | orchestrator | 2025-09-19 11:47:26 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:26.741817 | orchestrator | 2025-09-19 11:47:26 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:26.743640 | orchestrator | 2025-09-19 11:47:26 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:26.743680 | orchestrator | 2025-09-19 11:47:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:29.789255 | orchestrator | 2025-09-19 11:47:29 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:29.790696 | orchestrator | 2025-09-19 11:47:29 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:29.794142 | orchestrator | 2025-09-19 11:47:29 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:29.796575 | orchestrator | 2025-09-19 11:47:29 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:29.796605 | orchestrator | 2025-09-19 11:47:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:32.836973 | orchestrator | 2025-09-19 11:47:32 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:32.837322 | orchestrator | 2025-09-19 11:47:32 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:32.838283 | orchestrator | 2025-09-19 11:47:32 | INFO  | Task 
9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:32.839073 | orchestrator | 2025-09-19 11:47:32 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:32.839169 | orchestrator | 2025-09-19 11:47:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:35.882316 | orchestrator | 2025-09-19 11:47:35 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:35.884516 | orchestrator | 2025-09-19 11:47:35 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:35.886439 | orchestrator | 2025-09-19 11:47:35 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:35.888432 | orchestrator | 2025-09-19 11:47:35 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:35.888484 | orchestrator | 2025-09-19 11:47:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:38.929861 | orchestrator | 2025-09-19 11:47:38 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:38.932456 | orchestrator | 2025-09-19 11:47:38 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:38.934462 | orchestrator | 2025-09-19 11:47:38 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:38.937581 | orchestrator | 2025-09-19 11:47:38 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:38.938282 | orchestrator | 2025-09-19 11:47:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:41.985695 | orchestrator | 2025-09-19 11:47:41 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:41.991669 | orchestrator | 2025-09-19 11:47:41 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:41.992882 | orchestrator | 2025-09-19 11:47:41 | INFO  | Task 
9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:41.995561 | orchestrator | 2025-09-19 11:47:41 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:41.995607 | orchestrator | 2025-09-19 11:47:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:45.047242 | orchestrator | 2025-09-19 11:47:45 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:45.049607 | orchestrator | 2025-09-19 11:47:45 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:45.051560 | orchestrator | 2025-09-19 11:47:45 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:45.053325 | orchestrator | 2025-09-19 11:47:45 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:45.053356 | orchestrator | 2025-09-19 11:47:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:48.090348 | orchestrator | 2025-09-19 11:47:48 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:48.090440 | orchestrator | 2025-09-19 11:47:48 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:48.090894 | orchestrator | 2025-09-19 11:47:48 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:48.094613 | orchestrator | 2025-09-19 11:47:48 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:48.094639 | orchestrator | 2025-09-19 11:47:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:51.146278 | orchestrator | 2025-09-19 11:47:51 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:51.148230 | orchestrator | 2025-09-19 11:47:51 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:51.150652 | orchestrator | 2025-09-19 11:47:51 | INFO  | Task 
9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:51.152438 | orchestrator | 2025-09-19 11:47:51 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:51.152708 | orchestrator | 2025-09-19 11:47:51 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:54.193843 | orchestrator | 2025-09-19 11:47:54 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:54.194866 | orchestrator | 2025-09-19 11:47:54 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:54.196039 | orchestrator | 2025-09-19 11:47:54 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:54.197434 | orchestrator | 2025-09-19 11:47:54 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:54.197457 | orchestrator | 2025-09-19 11:47:54 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:47:57.268171 | orchestrator | 2025-09-19 11:47:57 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:47:57.268260 | orchestrator | 2025-09-19 11:47:57 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:47:57.269216 | orchestrator | 2025-09-19 11:47:57 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:47:57.270181 | orchestrator | 2025-09-19 11:47:57 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:47:57.270285 | orchestrator | 2025-09-19 11:47:57 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:48:00.329259 | orchestrator | 2025-09-19 11:48:00 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:48:00.332762 | orchestrator | 2025-09-19 11:48:00 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:48:00.333876 | orchestrator | 2025-09-19 11:48:00 | INFO  | Task 
9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:48:00.335511 | orchestrator | 2025-09-19 11:48:00 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:48:00.335553 | orchestrator | 2025-09-19 11:48:00 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:48:03.368167 | orchestrator | 2025-09-19 11:48:03 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:48:03.368705 | orchestrator | 2025-09-19 11:48:03 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:48:03.371101 | orchestrator | 2025-09-19 11:48:03 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:48:03.371596 | orchestrator | 2025-09-19 11:48:03 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:48:03.371724 | orchestrator | 2025-09-19 11:48:03 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:48:06.405639 | orchestrator | 2025-09-19 11:48:06 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:48:06.407398 | orchestrator | 2025-09-19 11:48:06 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:48:06.408512 | orchestrator | 2025-09-19 11:48:06 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED 2025-09-19 11:48:06.410035 | orchestrator | 2025-09-19 11:48:06 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED 2025-09-19 11:48:06.410070 | orchestrator | 2025-09-19 11:48:06 | INFO  | Wait 1 second(s) until the next check 2025-09-19 11:48:09.456124 | orchestrator | 2025-09-19 11:48:09 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED 2025-09-19 11:48:09.457703 | orchestrator | 2025-09-19 11:48:09 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state STARTED 2025-09-19 11:48:09.459214 | orchestrator | 2025-09-19 11:48:09 | INFO  | Task 
9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:09.460776 | orchestrator | 2025-09-19 11:48:09 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:09.460817 | orchestrator | 2025-09-19 11:48:09 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:12.489261 | orchestrator | 2025-09-19 11:48:12 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED
2025-09-19 11:48:12.490589 | orchestrator | 2025-09-19 11:48:12 | INFO  | Task 9e7138d1-7a9d-4625-b021-ad52325bae96 is in state SUCCESS
2025-09-19 11:48:12.492372 | orchestrator |
2025-09-19 11:48:12.492396 | orchestrator |
2025-09-19 11:48:12.492404 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:48:12.492429 | orchestrator |
2025-09-19 11:48:12.492437 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:48:12.492445 | orchestrator | Friday 19 September 2025 11:46:58 +0000 (0:00:00.201) 0:00:00.201 ******
2025-09-19 11:48:12.492453 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:48:12.492461 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:48:12.492469 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:48:12.492477 | orchestrator |
2025-09-19 11:48:12.492485 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:48:12.492492 | orchestrator | Friday 19 September 2025 11:46:58 +0000 (0:00:00.354) 0:00:00.556 ******
2025-09-19 11:48:12.492500 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-09-19 11:48:12.492508 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-09-19 11:48:12.492516 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-09-19 11:48:12.492523 | orchestrator |
2025-09-19 11:48:12.492542 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-09-19 11:48:12.492550 | orchestrator |
2025-09-19 11:48:12.492557 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-09-19 11:48:12.492565 | orchestrator | Friday 19 September 2025 11:46:59 +0000 (0:00:00.837) 0:00:01.394 ******
2025-09-19 11:48:12.492574 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:48:12.492581 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:48:12.492589 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:48:12.492597 | orchestrator |
2025-09-19 11:48:12.492604 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:48:12.492612 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:48:12.492621 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:48:12.492629 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 11:48:12.492637 | orchestrator |
2025-09-19 11:48:12.492644 | orchestrator |
2025-09-19 11:48:12.492652 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:48:12.492660 | orchestrator | Friday 19 September 2025 11:47:00 +0000 (0:00:00.836) 0:00:02.231 ******
2025-09-19 11:48:12.492668 | orchestrator | ===============================================================================
2025-09-19 11:48:12.492675 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s
2025-09-19 11:48:12.492683 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.84s
2025-09-19 11:48:12.492690 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2025-09-19 11:48:12.492698 | orchestrator |
2025-09-19 11:48:12.492706 | orchestrator
|
2025-09-19 11:48:12.492713 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:48:12.492721 | orchestrator |
2025-09-19 11:48:12.492729 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:48:12.492736 | orchestrator | Friday 19 September 2025 11:46:27 +0000 (0:00:00.191) 0:00:00.191 ******
2025-09-19 11:48:12.492744 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:48:12.492783 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:48:12.492792 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:48:12.492800 | orchestrator |
2025-09-19 11:48:12.492808 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:48:12.492815 | orchestrator | Friday 19 September 2025 11:46:27 +0000 (0:00:00.264) 0:00:00.455 ******
2025-09-19 11:48:12.492823 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-09-19 11:48:12.493012 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-09-19 11:48:12.493021 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-09-19 11:48:12.493029 | orchestrator |
2025-09-19 11:48:12.493037 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-09-19 11:48:12.493097 | orchestrator |
2025-09-19 11:48:12.493106 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-19 11:48:12.493114 | orchestrator | Friday 19 September 2025 11:46:27 +0000 (0:00:00.340) 0:00:00.796 ******
2025-09-19 11:48:12.493122 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:48:12.493130 | orchestrator |
2025-09-19 11:48:12.493138 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-09-19 11:48:12.493145 | orchestrator | Friday 19 September 2025 11:46:28 +0000 (0:00:00.450) 0:00:01.247 ******
2025-09-19 11:48:12.493154 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-09-19 11:48:12.493162 | orchestrator |
2025-09-19 11:48:12.493169 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-09-19 11:48:12.493177 | orchestrator | Friday 19 September 2025 11:46:32 +0000 (0:00:03.813) 0:00:05.060 ******
2025-09-19 11:48:12.493185 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-09-19 11:48:12.493193 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-09-19 11:48:12.493200 | orchestrator |
2025-09-19 11:48:12.493208 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-09-19 11:48:12.493216 | orchestrator | Friday 19 September 2025 11:46:38 +0000 (0:00:06.185) 0:00:11.246 ******
2025-09-19 11:48:12.493224 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 11:48:12.493232 | orchestrator |
2025-09-19 11:48:12.493240 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-09-19 11:48:12.493248 | orchestrator | Friday 19 September 2025 11:46:41 +0000 (0:00:03.080) 0:00:14.327 ******
2025-09-19 11:48:12.493263 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 11:48:12.493272 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-09-19 11:48:12.493279 | orchestrator |
2025-09-19 11:48:12.493287 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-09-19 11:48:12.493295 | orchestrator | Friday 19 September 2025 11:46:45 +0000 (0:00:03.767) 0:00:18.095 ******
2025-09-19 11:48:12.493303 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 11:48:12.493311 | orchestrator
|
2025-09-19 11:48:12.493318 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-09-19 11:48:12.493326 | orchestrator | Friday 19 September 2025 11:46:48 +0000 (0:00:02.805) 0:00:20.900 ******
2025-09-19 11:48:12.493334 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-09-19 11:48:12.493341 | orchestrator |
2025-09-19 11:48:12.493349 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-09-19 11:48:12.493362 | orchestrator | Friday 19 September 2025 11:46:51 +0000 (0:00:03.264) 0:00:24.165 ******
2025-09-19 11:48:12.493371 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:48:12.493378 | orchestrator |
2025-09-19 11:48:12.493386 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-09-19 11:48:12.493394 | orchestrator | Friday 19 September 2025 11:46:54 +0000 (0:00:02.804) 0:00:26.970 ******
2025-09-19 11:48:12.493402 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:48:12.493410 | orchestrator |
2025-09-19 11:48:12.493418 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-09-19 11:48:12.493425 | orchestrator | Friday 19 September 2025 11:46:57 +0000 (0:00:03.537) 0:00:30.507 ******
2025-09-19 11:48:12.493433 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:48:12.493441 | orchestrator |
2025-09-19 11:48:12.493449 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-09-19 11:48:12.493456 | orchestrator | Friday 19 September 2025 11:47:01 +0000 (0:00:03.714) 0:00:34.222 ******
2025-09-19 11:48:12.493467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT':
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.493483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.493492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.493510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.493520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:48:12.493533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:48:12.493541 | orchestrator |
2025-09-19 11:48:12.493549 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-09-19 11:48:12.493557 | orchestrator | Friday 19 September 2025 11:47:02 +0000 (0:00:01.487) 0:00:35.709 ******
2025-09-19 11:48:12.493565 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:48:12.493573 | orchestrator |
2025-09-19 11:48:12.493581 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-09-19 11:48:12.493589 | orchestrator | Friday 19 September 2025 11:47:02 +0000 (0:00:00.096) 0:00:35.806 ******
2025-09-19 11:48:12.493596 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:48:12.493604 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:48:12.493612 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:48:12.493619 | orchestrator |
2025-09-19 11:48:12.493627 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-09-19 11:48:12.493635 | orchestrator | Friday
19 September 2025 11:47:03 +0000 (0:00:00.393) 0:00:36.199 ****** 2025-09-19 11:48:12.493643 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:48:12.493651 | orchestrator | 2025-09-19 11:48:12.493658 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-19 11:48:12.493666 | orchestrator | Friday 19 September 2025 11:47:04 +0000 (0:00:01.222) 0:00:37.421 ****** 2025-09-19 11:48:12.493674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.493690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.493704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.493713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.493721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.493730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.493739 | orchestrator | 2025-09-19 11:48:12.493748 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-19 11:48:12.493786 | orchestrator | Friday 19 September 2025 11:47:07 +0000 (0:00:02.759) 0:00:40.181 ****** 2025-09-19 11:48:12.493796 | orchestrator | ok: [testbed-node-0] 
2025-09-19 11:48:12.493805 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:48:12.493814 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:48:12.493823 | orchestrator |
2025-09-19 11:48:12.493832 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-19 11:48:12.493841 | orchestrator | Friday 19 September 2025 11:47:07 +0000 (0:00:00.283) 0:00:40.465 ******
2025-09-19 11:48:12.493855 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:48:12.493864 | orchestrator |
2025-09-19 11:48:12.493873 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2025-09-19 11:48:12.493881 | orchestrator | Friday 19 September 2025 11:47:08 +0000 (0:00:00.665) 0:00:41.131 ******
2025-09-19 11:48:12.493918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-19 11:48:12.493929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.493939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.493949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.493967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.493982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.493991 | orchestrator | 2025-09-19 11:48:12.494001 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-19 11:48:12.494010 | orchestrator | Friday 19 September 2025 11:47:10 +0000 (0:00:02.581) 0:00:43.713 ****** 2025-09-19 11:48:12.494068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:48:12.494077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:48:12.494086 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:48:12.494100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:48:12.494123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:48:12.494131 | orchestrator | skipping: [testbed-node-1] 2025-09-19 
11:48:12.494139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:48:12.494148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:48:12.494156 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:48:12.494164 | orchestrator | 2025-09-19 11:48:12.494172 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-19 11:48:12.494180 | 
orchestrator | Friday 19 September 2025 11:47:11 +0000 (0:00:00.622) 0:00:44.335 ****** 2025-09-19 11:48:12.494188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:48:12.494207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:48:12.494216 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:48:12.494228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:48:12.494237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:48:12.494245 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:48:12.494253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:48:12.494261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:48:12.494275 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:48:12.494283 | orchestrator | 2025-09-19 11:48:12.494290 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-19 11:48:12.494298 | orchestrator | Friday 19 September 2025 11:47:12 +0000 (0:00:01.205) 0:00:45.540 ****** 2025-09-19 11:48:12.494315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.494324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.494333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.494341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.494358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.494370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.494378 | orchestrator | 2025-09-19 11:48:12.494386 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-19 11:48:12.494394 | orchestrator | Friday 19 September 2025 11:47:15 +0000 (0:00:02.532) 0:00:48.073 ****** 2025-09-19 11:48:12.494403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.494411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.494420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.494439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.494451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.494460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.494468 | orchestrator | 2025-09-19 11:48:12.494476 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-19 11:48:12.494484 | orchestrator | Friday 19 September 2025 11:47:20 +0000 (0:00:04.982) 0:00:53.056 ****** 2025-09-19 11:48:12.494492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:48:12.494505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:48:12.494514 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:48:12.494527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:48:12.494539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:48:12.494547 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:48:12.494555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 11:48:12.494564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:48:12.494576 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:48:12.494584 | orchestrator | 2025-09-19 11:48:12.494592 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-19 11:48:12.494600 | orchestrator | Friday 19 September 2025 11:47:21 +0000 (0:00:00.860) 0:00:53.916 ****** 2025-09-19 11:48:12.494613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.494625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.494633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 11:48:12.494642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.494654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.494666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:48:12.494675 | orchestrator | 2025-09-19 11:48:12.494683 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2025-09-19 11:48:12.494691 | orchestrator | Friday 19 September 2025 11:47:23 +0000 (0:00:02.270) 0:00:56.187 ****** 2025-09-19 11:48:12.494699 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:48:12.494707 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:48:12.494714 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:48:12.494722 | orchestrator | 2025-09-19 11:48:12.494730 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-19 11:48:12.494738 | orchestrator | Friday 19 September 2025 11:47:23 +0000 (0:00:00.244) 0:00:56.431 ****** 2025-09-19 11:48:12.494746 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:48:12.494768 | orchestrator | 2025-09-19 11:48:12.494776 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-19 11:48:12.494789 | orchestrator | Friday 19 September 2025 11:47:26 +0000 (0:00:02.389) 0:00:58.821 ****** 2025-09-19 11:48:12.494797 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:48:12.494805 | orchestrator | 2025-09-19 11:48:12.494813 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-19 11:48:12.494821 | orchestrator | Friday 19 September 2025 11:47:28 +0000 (0:00:02.423) 0:01:01.244 ****** 2025-09-19 11:48:12.494829 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:48:12.494837 | orchestrator | 2025-09-19 11:48:12.494844 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-19 11:48:12.494852 | orchestrator | Friday 19 September 2025 11:47:46 +0000 (0:00:17.984) 0:01:19.229 ****** 2025-09-19 11:48:12.494860 | orchestrator | 2025-09-19 11:48:12.494868 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-19 11:48:12.494876 | orchestrator | Friday 19 September 2025 11:47:46 +0000 
(0:00:00.063) 0:01:19.293 ****** 2025-09-19 11:48:12.494883 | orchestrator | 2025-09-19 11:48:12.494891 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-19 11:48:12.494899 | orchestrator | Friday 19 September 2025 11:47:46 +0000 (0:00:00.060) 0:01:19.354 ****** 2025-09-19 11:48:12.494907 | orchestrator | 2025-09-19 11:48:12.494915 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-19 11:48:12.494927 | orchestrator | Friday 19 September 2025 11:47:46 +0000 (0:00:00.063) 0:01:19.417 ****** 2025-09-19 11:48:12.494935 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:48:12.494943 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:48:12.494951 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:48:12.494958 | orchestrator | 2025-09-19 11:48:12.494966 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-09-19 11:48:12.494974 | orchestrator | Friday 19 September 2025 11:48:01 +0000 (0:00:14.789) 0:01:34.206 ****** 2025-09-19 11:48:12.494982 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:48:12.494990 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:48:12.494998 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:48:12.495006 | orchestrator | 2025-09-19 11:48:12.495014 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:48:12.495022 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 11:48:12.495030 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 11:48:12.495038 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 11:48:12.495046 | orchestrator | 2025-09-19 11:48:12.495054 | orchestrator | 2025-09-19 11:48:12.495062 | 
orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:48:12.495069 | orchestrator | Friday 19 September 2025 11:48:10 +0000 (0:00:09.241) 0:01:43.448 ******
2025-09-19 11:48:12.495077 | orchestrator | ===============================================================================
2025-09-19 11:48:12.495085 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.98s
2025-09-19 11:48:12.495093 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.79s
2025-09-19 11:48:12.495100 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.24s
2025-09-19 11:48:12.495108 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.19s
2025-09-19 11:48:12.495116 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.98s
2025-09-19 11:48:12.495124 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.81s
2025-09-19 11:48:12.495132 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.77s
2025-09-19 11:48:12.495139 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.71s
2025-09-19 11:48:12.495147 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.54s
2025-09-19 11:48:12.495155 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.26s
2025-09-19 11:48:12.495162 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.08s
2025-09-19 11:48:12.495170 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 2.81s
2025-09-19 11:48:12.495178 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.80s
2025-09-19 11:48:12.495186 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.76s
2025-09-19 11:48:12.495194 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.58s
2025-09-19 11:48:12.495202 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.53s
2025-09-19 11:48:12.495213 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.42s
2025-09-19 11:48:12.495221 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.39s
2025-09-19 11:48:12.495229 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.27s
2025-09-19 11:48:12.495237 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.49s
2025-09-19 11:48:12.495249 | orchestrator | 2025-09-19 11:48:12 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:12.495257 | orchestrator | 2025-09-19 11:48:12 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:12.495265 | orchestrator | 2025-09-19 11:48:12 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:15.532976 | orchestrator | 2025-09-19 11:48:15 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED
2025-09-19 11:48:15.534316 | orchestrator | 2025-09-19 11:48:15 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:15.535983 | orchestrator | 2025-09-19 11:48:15 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:15.536117 | orchestrator | 2025-09-19 11:48:15 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:18.569069 | orchestrator | 2025-09-19 11:48:18 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED
2025-09-19 11:48:18.569164 | orchestrator | 2025-09-19 11:48:18 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:18.569506 | orchestrator | 2025-09-19 11:48:18 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:18.569529 | orchestrator | 2025-09-19 11:48:18 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:21.598661 | orchestrator | 2025-09-19 11:48:21 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED
2025-09-19 11:48:21.599179 | orchestrator | 2025-09-19 11:48:21 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:21.600395 | orchestrator | 2025-09-19 11:48:21 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:21.600419 | orchestrator | 2025-09-19 11:48:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:24.622728 | orchestrator | 2025-09-19 11:48:24 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED
2025-09-19 11:48:24.624264 | orchestrator | 2025-09-19 11:48:24 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:24.627250 | orchestrator | 2025-09-19 11:48:24 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:24.627698 | orchestrator | 2025-09-19 11:48:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:27.660137 | orchestrator | 2025-09-19 11:48:27 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED
2025-09-19 11:48:27.660411 | orchestrator | 2025-09-19 11:48:27 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:27.661020 | orchestrator | 2025-09-19 11:48:27 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:27.661043 | orchestrator | 2025-09-19 11:48:27 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:30.704881 | orchestrator | 2025-09-19 11:48:30 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED
2025-09-19 11:48:30.706747 | orchestrator | 2025-09-19 11:48:30 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:30.708994 | orchestrator | 2025-09-19 11:48:30 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:30.709020 | orchestrator | 2025-09-19 11:48:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:33.747136 | orchestrator | 2025-09-19 11:48:33 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED
2025-09-19 11:48:33.748504 | orchestrator | 2025-09-19 11:48:33 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:33.750478 | orchestrator | 2025-09-19 11:48:33 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:33.750506 | orchestrator | 2025-09-19 11:48:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:36.793940 | orchestrator | 2025-09-19 11:48:36 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED
2025-09-19 11:48:36.796365 | orchestrator | 2025-09-19 11:48:36 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:36.798313 | orchestrator | 2025-09-19 11:48:36 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:36.798350 | orchestrator | 2025-09-19 11:48:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:39.839151 | orchestrator | 2025-09-19 11:48:39 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED
2025-09-19 11:48:39.841004 | orchestrator | 2025-09-19 11:48:39 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:39.842502 | orchestrator | 2025-09-19 11:48:39 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:39.842524 | orchestrator | 2025-09-19 11:48:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:42.883585 | orchestrator | 2025-09-19 11:48:42 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state STARTED
2025-09-19 11:48:42.885565 | orchestrator | 2025-09-19 11:48:42 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:42.887508 | orchestrator | 2025-09-19 11:48:42 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:42.887908 | orchestrator | 2025-09-19 11:48:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:45.929325 | orchestrator | 2025-09-19 11:48:45 | INFO  | Task cacf384c-5549-49af-b435-6ebdaaa5a1bb is in state SUCCESS
2025-09-19 11:48:45.931115 | orchestrator |
2025-09-19 11:48:45.931246 | orchestrator |
2025-09-19 11:48:45.931266 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:48:45.931279 | orchestrator |
2025-09-19 11:48:45.931291 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:48:45.931302 | orchestrator | Friday 19 September 2025 11:46:31 +0000 (0:00:00.266) 0:00:00.267 ******
2025-09-19 11:48:45.931417 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:48:45.931430 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:48:45.931441 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:48:45.931452 | orchestrator |
2025-09-19 11:48:45.931463 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:48:45.931474 | orchestrator | Friday 19 September 2025 11:46:32 +0000 (0:00:00.266) 0:00:00.533 ******
2025-09-19 11:48:45.931485 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-09-19 11:48:45.931496 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-09-19 11:48:45.931507 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-09-19 11:48:45.931518 | orchestrator |
2025-09-19 11:48:45.931529 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-09-19 11:48:45.931540 | orchestrator |
2025-09-19 11:48:45.931551 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-19 11:48:45.931562 | orchestrator | Friday 19 September 2025 11:46:32 +0000 (0:00:00.338) 0:00:00.872 ******
2025-09-19 11:48:45.931573 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:48:45.931585 | orchestrator |
2025-09-19 11:48:45.931596 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-09-19 11:48:45.931607 | orchestrator | Friday 19 September 2025 11:46:33 +0000 (0:00:00.663) 0:00:01.536 ******
2025-09-19 11:48:45.931647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.931663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.932062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.932085 | orchestrator |
2025-09-19 11:48:45.932096 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-09-19 11:48:45.932107 | orchestrator | Friday 19 September 2025 11:46:34 +0000 (0:00:00.864) 0:00:02.400 ******
2025-09-19 11:48:45.932118 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-09-19 11:48:45.932141 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-09-19 11:48:45.932152 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 11:48:45.932162 | orchestrator |
2025-09-19 11:48:45.932174 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-19 11:48:45.932185 | orchestrator | Friday 19 September 2025 11:46:35 +0000 (0:00:01.053) 0:00:03.454 ******
2025-09-19 11:48:45.932195 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:48:45.932206 | orchestrator |
2025-09-19 11:48:45.932216 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-09-19 11:48:45.932227 | orchestrator | Friday 19 September 2025 11:46:36 +0000 (0:00:00.938) 0:00:04.393 ******
2025-09-19 11:48:45.932252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.932264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.932285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.932296 | orchestrator |
2025-09-19 11:48:45.932307 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-09-19 11:48:45.932318 | orchestrator | Friday 19 September 2025 11:46:37 +0000 (0:00:01.758) 0:00:06.151 ******
2025-09-19 11:48:45.932329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.932340 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:48:45.932357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.932368 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:48:45.932389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.932402 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:48:45.932420 | orchestrator |
2025-09-19 11:48:45.933006 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-09-19 11:48:45.933052 | orchestrator | Friday 19 September 2025 11:46:38 +0000 (0:00:00.332) 0:00:06.483 ******
2025-09-19 11:48:45.933064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.933077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.933088 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:48:45.933099 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:48:45.933111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.933122 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:48:45.933133 | orchestrator |
2025-09-19 11:48:45.933144 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-09-19 11:48:45.933154 | orchestrator | Friday 19 September 2025 11:46:38 +0000 (0:00:00.583) 0:00:07.067 ******
2025-09-19 11:48:45.933166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.933226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.933249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.933261 | orchestrator |
2025-09-19 11:48:45.933272 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-09-19 11:48:45.933282 | orchestrator | Friday 19 September 2025 11:46:39 +0000 (0:00:01.341) 0:00:08.307 ******
2025-09-19 11:48:45.933293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.933305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.933317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 11:48:45.933328 | orchestrator |
2025-09-19 11:48:45.933339 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-09-19 11:48:45.933349 | orchestrator | Friday 19 September 2025 11:46:41 +0000 (0:00:00.391) 0:00:09.648 ******
2025-09-19 11:48:45.933360 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:48:45.933371 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:48:45.933381 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:48:45.933392 | orchestrator |
2025-09-19 11:48:45.933403 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-09-19 11:48:45.933413 | orchestrator | Friday 19 September 2025 11:46:41 +0000 (0:00:00.391) 0:00:10.039 ******
2025-09-19 11:48:45.933424 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-19 11:48:45.933439 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-19 11:48:45.933457 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-09-19 11:48:45.933467 | orchestrator |
2025-09-19 11:48:45.933478 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-09-19 11:48:45.933489 | orchestrator | Friday 19 September 2025 11:46:42 +0000 (0:00:01.233) 0:00:11.273 ******
2025-09-19 11:48:45.933500 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-19 11:48:45.933539 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-19 11:48:45.933552 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-09-19 11:48:45.933563 | orchestrator |
2025-09-19 11:48:45.933574 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-09-19 11:48:45.933585 | orchestrator | Friday 19 September 2025 11:46:44 +0000 (0:00:01.278) 0:00:12.551 ******
2025-09-19 11:48:45.933596 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 11:48:45.933608 | orchestrator |
2025-09-19 11:48:45.933621 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-09-19 11:48:45.933633 | orchestrator | Friday 19 September 2025 11:46:44 +0000 (0:00:00.732) 0:00:13.283 ******
2025-09-19 11:48:45.933645 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-09-19 11:48:45.933657 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-09-19 11:48:45.933670 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:48:45.933681 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:48:45.933693 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:48:45.933705 | orchestrator |
2025-09-19 11:48:45.933759 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-09-19 11:48:45.933771 | orchestrator | Friday 19 September 2025 11:46:45 +0000 (0:00:00.621) 0:00:13.905 ******
2025-09-19 11:48:45.933784 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:48:45.933795 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:48:45.933806 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:48:45.933816 | orchestrator |
2025-09-19 11:48:45.933827 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-09-19 11:48:45.933838 | orchestrator | Friday 19 September 2025 11:46:45 +0000 (0:00:00.402) 0:00:14.308 ******
2025-09-19 11:48:45.933849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090152, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.079534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.933861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090152, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.079534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.933872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090152, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.079534, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.933898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090169, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0978067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.933949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090169, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0978067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.933963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090169, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0978067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.933975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090155, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0814738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.933988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090155, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0814738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.934009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090155, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0814738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.934122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090170, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1005502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.934216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090170, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1005502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.934233 | orchestrator
| changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090170, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1005502, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090160, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0848253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090160, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0848253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-09-19 11:48:45.934267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090160, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0848253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090165, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0900612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090165, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0900612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090165, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0900612, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090151, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.078345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090151, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.078345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090151, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.078345, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090153, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0795498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090153, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0795498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090153, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0795498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090156, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0814738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090156, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0814738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090156, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0814738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090162, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.08755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090162, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.08755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090162, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.08755, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090168, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0978067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090168, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0978067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090168, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0978067, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090154, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.080851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090154, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.080851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090154, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.080851, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090164, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0885499, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090164, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0885499, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090164, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0885499, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090161, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0865498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090161, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0865498, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090161, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0865498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090159, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0836587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090159, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 
'ctime': 1758279452.0836587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090159, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0836587, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090158, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0834274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090158, 'dev': 124, 'nlink': 1, 
'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0834274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090158, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0834274, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090163, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0885499, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090163, 
'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0885499, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090163, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0885499, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090157, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.934977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 
'inode': 1090157, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.934988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090157, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0822, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090167, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0909014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090167, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0909014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090167, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.0909014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090208, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1398573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090208, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1398573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090208, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1398573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090178, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1115503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090178, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1115503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090178, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1115503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090175, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1046734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090175, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1046734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090175, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1046734, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090185, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1185505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090185, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1185505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090185, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1185505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090172, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1023011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090172, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1023011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090172, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1023011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090193, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1308959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090193, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1308959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090193, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1308959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090186, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.126716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090186, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.126716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090186, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.126716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090196, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1308959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090196, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1308959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090196, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1308959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090204, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1375508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090204, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1375508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090204, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1375508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090191, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1292813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090191, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1292813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090191, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1292813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090182, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1145504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090182, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1145504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090182, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1145504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090177, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.107849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090177, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.107849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090177, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.107849, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090181, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1135504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090181, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1135504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090181, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1135504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090176, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.106768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090176, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.106768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090176, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.106768, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090183, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1165504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090183, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1165504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090183, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1165504, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090201, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1375508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090201, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1375508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090201, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1375508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090199, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1345508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090199, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1345508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090199, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1345508, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090173, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1029723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090173, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1029723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090173, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1029723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090174, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.104128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090174, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.104128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 11:48:45.935803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090174, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.104128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.935813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090189, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1279843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.935823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090189, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1279843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.935838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090189, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1279843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.935859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090197, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1318927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.935869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090197, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 
'ctime': 1758279452.1318927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.935879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090197, 'dev': 124, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1758279452.1318927, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 11:48:45.935889 | orchestrator | 2025-09-19 11:48:45.935900 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-19 11:48:45.935910 | orchestrator | Friday 19 September 2025 11:47:21 +0000 (0:00:35.613) 0:00:49.922 ****** 2025-09-19 11:48:45.935920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:48:45.935930 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:48:45.935944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 11:48:45.935960 | orchestrator | 2025-09-19 11:48:45.935970 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-19 11:48:45.935980 | orchestrator | Friday 19 September 2025 11:47:22 +0000 (0:00:01.098) 0:00:51.020 ****** 2025-09-19 11:48:45.935990 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:48:45.935999 | orchestrator | 2025-09-19 11:48:45.936009 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-19 11:48:45.936024 | orchestrator | Friday 19 September 2025 11:47:25 +0000 (0:00:02.398) 0:00:53.419 
****** 2025-09-19 11:48:45.936033 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:48:45.936043 | orchestrator | 2025-09-19 11:48:45.936053 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-19 11:48:45.936062 | orchestrator | Friday 19 September 2025 11:47:27 +0000 (0:00:02.627) 0:00:56.047 ****** 2025-09-19 11:48:45.936072 | orchestrator | 2025-09-19 11:48:45.936081 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-19 11:48:45.936091 | orchestrator | Friday 19 September 2025 11:47:27 +0000 (0:00:00.221) 0:00:56.268 ****** 2025-09-19 11:48:45.936101 | orchestrator | 2025-09-19 11:48:45.936110 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-19 11:48:45.936120 | orchestrator | Friday 19 September 2025 11:47:28 +0000 (0:00:00.063) 0:00:56.332 ****** 2025-09-19 11:48:45.936129 | orchestrator | 2025-09-19 11:48:45.936139 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-19 11:48:45.936148 | orchestrator | Friday 19 September 2025 11:47:28 +0000 (0:00:00.080) 0:00:56.412 ****** 2025-09-19 11:48:45.936158 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:48:45.936168 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:48:45.936177 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:48:45.936187 | orchestrator | 2025-09-19 11:48:45.936196 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-19 11:48:45.936206 | orchestrator | Friday 19 September 2025 11:47:30 +0000 (0:00:01.919) 0:00:58.332 ****** 2025-09-19 11:48:45.936215 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:48:45.936225 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:48:45.936235 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries 
left). 2025-09-19 11:48:45.936244 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-19 11:48:45.936254 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-09-19 11:48:45.936263 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:48:45.936273 | orchestrator | 2025-09-19 11:48:45.936283 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-19 11:48:45.936292 | orchestrator | Friday 19 September 2025 11:48:08 +0000 (0:00:38.746) 0:01:37.078 ****** 2025-09-19 11:48:45.936302 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:48:45.936311 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:48:45.936321 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:48:45.936331 | orchestrator | 2025-09-19 11:48:45.936340 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-19 11:48:45.936350 | orchestrator | Friday 19 September 2025 11:48:39 +0000 (0:00:30.653) 0:02:07.731 ****** 2025-09-19 11:48:45.936360 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:48:45.936369 | orchestrator | 2025-09-19 11:48:45.936385 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-19 11:48:45.936395 | orchestrator | Friday 19 September 2025 11:48:41 +0000 (0:00:02.269) 0:02:10.002 ****** 2025-09-19 11:48:45.936404 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:48:45.936414 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:48:45.936423 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:48:45.936433 | orchestrator | 2025-09-19 11:48:45.936443 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-09-19 11:48:45.936452 | orchestrator | Friday 19 September 2025 11:48:42 +0000 (0:00:00.392) 0:02:10.394 ****** 
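The "FAILED - RETRYING: ... (N retries left)" lines above come from Ansible's `until`/`retries`/`delay` mechanism: the handler probes Grafana, and each failed probe decrements a countdown before the next attempt. A minimal Python sketch of that countdown pattern (the function name and the fixed 12/5s values are assumptions for illustration, not the actual kolla-ansible task):

```python
import time

def wait_until(check, retries=12, delay=5.0,
               name="Waiting for grafana to start on first node"):
    """Retry `check` until it succeeds or the countdown is exhausted.

    Mirrors the Ansible until/retries/delay output seen in the log:
    each failed probe prints "(N retries left)" and sleeps before retrying.
    (Approximation only: real Ansible makes retries+1 total attempts.)
    """
    for attempt in range(retries, 0, -1):
        if check():
            return True  # corresponds to the final "ok: [testbed-node-0]"
        print(f"FAILED - RETRYING: {name} ({attempt} retries left).")
        time.sleep(delay)
    return False
```

In the log the probe failed three times (12, 11, 10 retries left) and then succeeded, i.e. Grafana answered on the fourth check.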
2025-09-19 11:48:45.936463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-09-19 11:48:45.936474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-09-19 11:48:45.936484 | orchestrator |
2025-09-19 11:48:45.936494 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-09-19 11:48:45.936504 | orchestrator | Friday 19 September 2025 11:48:44 +0000 (0:00:02.807) 0:02:13.202 ******
2025-09-19 11:48:45.936513 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:48:45.936523 | orchestrator |
2025-09-19 11:48:45.936533 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:48:45.936542 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 11:48:45.936557 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 11:48:45.936567 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 11:48:45.936577 | orchestrator |
2025-09-19 11:48:45.936586 | orchestrator |
2025-09-19 11:48:45.936596 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:48:45.936606 | orchestrator | Friday 19 September 2025 11:48:45 +0000 (0:00:00.237) 0:02:13.439 ******
2025-09-19 11:48:45.936615 | orchestrator | ===============================================================================
2025-09-19 11:48:45.936630 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.75s
2025-09-19 11:48:45.936640 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 35.61s
2025-09-19 11:48:45.936650 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.65s
2025-09-19 11:48:45.936659 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.81s
2025-09-19 11:48:45.936669 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.63s
2025-09-19 11:48:45.936679 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.40s
2025-09-19 11:48:45.936688 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.27s
2025-09-19 11:48:45.936698 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.92s
2025-09-19 11:48:45.936718 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.76s
2025-09-19 11:48:45.936728 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.34s
2025-09-19 11:48:45.936738 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.28s
2025-09-19 11:48:45.936747 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.24s
2025-09-19 11:48:45.936762 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.23s
2025-09-19 11:48:45.936772 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.10s
2025-09-19 11:48:45.936782 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.05s
2025-09-19 11:48:45.936791 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.94s
2025-09-19 11:48:45.936801 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.86s
2025-09-19 11:48:45.936810 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.73s
2025-09-19 11:48:45.936820 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.66s
2025-09-19 11:48:45.936829 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.62s
2025-09-19 11:48:45.936839 | orchestrator | 2025-09-19 11:48:45 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:45.936849 | orchestrator | 2025-09-19 11:48:45 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:45.936858 | orchestrator | 2025-09-19 11:48:45 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:48.974233 | orchestrator | 2025-09-19 11:48:48 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:48.976129 | orchestrator | 2025-09-19 11:48:48 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:48.976166 | orchestrator | 2025-09-19 11:48:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:52.015545 | orchestrator | 2025-09-19 11:48:52 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:52.015618 | orchestrator | 2025-09-19 11:48:52 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:52.015631 | orchestrator | 2025-09-19 11:48:52 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:55.049736 | orchestrator | 2025-09-19 11:48:55 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:55.051333 | orchestrator | 2025-09-19 11:48:55 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:55.051357 | orchestrator | 2025-09-19 11:48:55 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:48:58.079259 | orchestrator | 2025-09-19 11:48:58 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:48:58.080277 | orchestrator | 2025-09-19 11:48:58 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:48:58.080349 | orchestrator | 2025-09-19 11:48:58 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:49:01.131420 | orchestrator | 2025-09-19 11:49:01 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state STARTED
2025-09-19 11:49:01.131966 | orchestrator | 2025-09-19 11:49:01 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:49:01.131984 | orchestrator | 2025-09-19 11:49:01 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:49:04.174074 | orchestrator | 2025-09-19 11:49:04 | INFO  | Task 9307508d-ebd4-46ad-928d-cf494cb040ba is in state SUCCESS
2025-09-19 11:49:04.175608 | orchestrator |
2025-09-19 11:49:04.175653 | orchestrator |
2025-09-19 11:49:04.175667 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 11:49:04.176251 | orchestrator |
2025-09-19 11:49:04.176266 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-09-19 11:49:04.176276 | orchestrator | Friday 19 September 2025 11:40:21 +0000 (0:00:00.440) 0:00:00.440 ******
2025-09-19 11:49:04.176286 | orchestrator | changed: [testbed-manager]
2025-09-19 11:49:04.176297 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.176327 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:49:04.176337 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:49:04.176347 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:49:04.176356 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:49:04.176366 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:49:04.176375 | orchestrator |
2025-09-19 11:49:04.176385 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 11:49:04.176427 | orchestrator | Friday 19 September 2025 11:40:22 +0000 (0:00:00.995) 0:00:01.435 ******
2025-09-19 11:49:04.176439 | orchestrator | changed: [testbed-manager]
2025-09-19 11:49:04.176448 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.176458 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:49:04.176468 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:49:04.176477 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:49:04.176532 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:49:04.176924 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:49:04.176937 | orchestrator |
2025-09-19 11:49:04.176947 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 11:49:04.176957 | orchestrator | Friday 19 September 2025 11:40:23 +0000 (0:00:00.701) 0:00:02.137 ******
2025-09-19 11:49:04.176968 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-09-19 11:49:04.176977 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-09-19 11:49:04.176987 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-09-19 11:49:04.176997 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-09-19 11:49:04.177006 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-09-19 11:49:04.177015 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-09-19 11:49:04.177025 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-09-19 11:49:04.177034 | orchestrator |
2025-09-19 11:49:04.177044 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-09-19 11:49:04.177053 | orchestrator |
2025-09-19 11:49:04.177063 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-19 11:49:04.177072 | orchestrator | Friday 19 September 2025 11:40:24 +0000 (0:00:01.017) 0:00:03.155 ******
2025-09-19 11:49:04.177082 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:49:04.177091 | orchestrator |
2025-09-19 11:49:04.177101 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-09-19 11:49:04.177110 | orchestrator | Friday 19 September 2025 11:40:25 +0000 (0:00:00.590) 0:00:03.745 ******
2025-09-19 11:49:04.177120 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-09-19 11:49:04.177129 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-09-19 11:49:04.177139 | orchestrator |
2025-09-19 11:49:04.177148 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-09-19 11:49:04.177158 | orchestrator | Friday 19 September 2025 11:40:29 +0000 (0:00:04.186) 0:00:07.931 ******
2025-09-19 11:49:04.177167 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 11:49:04.177177 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 11:49:04.177187 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.177196 | orchestrator |
2025-09-19 11:49:04.177264 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-09-19 11:49:04.177276 | orchestrator | Friday 19 September 2025 11:40:33 +0000 (0:00:04.099) 0:00:12.032 ******
2025-09-19 11:49:04.177521 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.177532 | orchestrator |
2025-09-19 11:49:04.177542 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-09-19 11:49:04.177551 | orchestrator | Friday 19 September 2025 11:40:34 +0000 (0:00:00.735) 0:00:12.767 ******
2025-09-19 11:49:04.177561 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.177631 | orchestrator |
2025-09-19 11:49:04.178191 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-09-19 11:49:04.178225 | orchestrator | Friday 19 September 2025 11:40:35 +0000 (0:00:01.567) 0:00:14.334 ******
2025-09-19 11:49:04.178235 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.178245 | orchestrator |
2025-09-19 11:49:04.178255 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-19 11:49:04.178264 | orchestrator | Friday 19 September 2025 11:40:39 +0000 (0:00:03.768) 0:00:18.103 ******
2025-09-19 11:49:04.178274 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.178283 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.178293 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.178302 | orchestrator |
2025-09-19 11:49:04.178312 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-19 11:49:04.178321 | orchestrator | Friday 19 September 2025 11:40:40 +0000 (0:00:00.387) 0:00:18.491 ******
2025-09-19 11:49:04.178331 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:49:04.178341 | orchestrator |
2025-09-19 11:49:04.178351 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-09-19 11:49:04.178360 | orchestrator | Friday 19 September 2025 11:41:09 +0000 (0:00:29.912) 0:00:48.403 ******
2025-09-19 11:49:04.178369 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.178379 | orchestrator |
2025-09-19 11:49:04.178389 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-19 11:49:04.178398 | orchestrator | Friday 19 September 2025 11:41:23 +0000 (0:00:14.028) 0:01:02.432 ******
2025-09-19 11:49:04.178408 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:49:04.178417 | orchestrator |
2025-09-19 11:49:04.178427 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-19 11:49:04.178446 | orchestrator | Friday 19 September 2025 11:41:35 +0000 (0:00:11.655) 0:01:14.087 ******
2025-09-19 11:49:04.178539 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:49:04.178553 | orchestrator |
2025-09-19 11:49:04.178563 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-09-19 11:49:04.178573 | orchestrator | Friday 19 September 2025 11:41:36 +0000 (0:00:01.343) 0:01:15.430 ******
2025-09-19 11:49:04.178582 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.178591 | orchestrator |
2025-09-19 11:49:04.178601 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-19 11:49:04.178610 | orchestrator | Friday 19 September 2025 11:41:37 +0000 (0:00:00.486) 0:01:15.917 ******
2025-09-19 11:49:04.178620 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:49:04.178630 | orchestrator |
2025-09-19 11:49:04.178665 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-09-19 11:49:04.178678 | orchestrator | Friday 19 September 2025 11:41:37 +0000 (0:00:00.520) 0:01:16.437 ******
2025-09-19 11:49:04.178726 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:49:04.178745 | orchestrator |
2025-09-19 11:49:04.178761 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-09-19 11:49:04.178778 | orchestrator | Friday 19 September 2025 11:41:55 +0000 (0:00:17.985) 0:01:34.423 ******
2025-09-19 11:49:04.178794 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.178810 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.178826 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.178841 | orchestrator |
2025-09-19 11:49:04.178857 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-09-19 11:49:04.178875 | orchestrator |
2025-09-19 11:49:04.178891 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-09-19 11:49:04.178907 | orchestrator | Friday 19 September 2025 11:41:56 +0000 (0:00:00.463) 0:01:34.886 ******
2025-09-19 11:49:04.178917 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:49:04.178926 | orchestrator |
2025-09-19 11:49:04.178936 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-09-19 11:49:04.178945 | orchestrator | Friday 19 September 2025 11:41:57 +0000 (0:00:01.233) 0:01:36.119 ******
2025-09-19 11:49:04.178967 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.178977 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.178986 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.178996 | orchestrator |
2025-09-19 11:49:04.179005 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-09-19 11:49:04.179015 | orchestrator | Friday 19 September 2025 11:42:00 +0000 (0:00:02.394) 0:01:38.513 ******
2025-09-19 11:49:04.179024 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.179033 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.179043 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.179052 | orchestrator |
2025-09-19 11:49:04.179061 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-19 11:49:04.179071 | orchestrator | Friday 19 September 2025 11:42:02 +0000 (0:00:02.479) 0:01:40.993 ******
2025-09-19 11:49:04.179080 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.179089 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.179098 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.179108 | orchestrator |
2025-09-19 11:49:04.179117 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-19 11:49:04.179127 | orchestrator | Friday 19 September 2025 11:42:03 +0000 (0:00:00.658) 0:01:41.651 ******
2025-09-19 11:49:04.179136 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-19 11:49:04.179146 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.179155 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-19 11:49:04.179164 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.179173 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-19 11:49:04.179183 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-09-19 11:49:04.179192 | orchestrator |
2025-09-19 11:49:04.179201 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-09-19 11:49:04.179211 | orchestrator | Friday 19 September 2025 11:42:12 +0000 (0:00:08.910) 0:01:50.562 ******
2025-09-19 11:49:04.179220 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.179229 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.179238 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.179248 | orchestrator |
2025-09-19 11:49:04.179257 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-09-19 11:49:04.179267 | orchestrator | Friday 19 September 2025 11:42:12 +0000 (0:00:00.633) 0:01:51.197 ******
2025-09-19 11:49:04.179276 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-19 11:49:04.179286 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.179295 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-19 11:49:04.179304 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.179313 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-19 11:49:04.179323 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.179332 | orchestrator |
2025-09-19 11:49:04.179342 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-19 11:49:04.179351 | orchestrator | Friday 19 September 2025 11:42:13 +0000 (0:00:01.094) 0:01:52.291 ******
2025-09-19 11:49:04.179360 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.179369 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.179379 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.179388 | orchestrator |
2025-09-19 11:49:04.179397 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-09-19 11:49:04.179407 | orchestrator | Friday 19 September 2025 11:42:14 +0000 (0:00:00.529) 0:01:52.821 ******
2025-09-19 11:49:04.179416 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.179425 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.179435 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.179444 | orchestrator |
2025-09-19 11:49:04.179453 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-09-19 11:49:04.179469 | orchestrator | Friday 19 September 2025 11:42:15 +0000 (0:00:01.026) 0:01:53.848 ******
2025-09-19 11:49:04.179485 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.179495 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.179589 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.179603 | orchestrator |
2025-09-19 11:49:04.179613 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-09-19 11:49:04.179623 | orchestrator | Friday 19 September 2025 11:42:17 +0000 (0:00:02.030) 0:01:55.878 ******
2025-09-19 11:49:04.179633 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.179642 | orchestrator |
skipping: [testbed-node-2] 2025-09-19 11:49:04.179652 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:49:04.179661 | orchestrator | 2025-09-19 11:49:04.179671 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-19 11:49:04.179680 | orchestrator | Friday 19 September 2025 11:42:37 +0000 (0:00:20.275) 0:02:16.153 ****** 2025-09-19 11:49:04.179730 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.179741 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.179750 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:49:04.179765 | orchestrator | 2025-09-19 11:49:04.179782 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-19 11:49:04.179797 | orchestrator | Friday 19 September 2025 11:42:50 +0000 (0:00:12.999) 0:02:29.153 ****** 2025-09-19 11:49:04.179813 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:49:04.179831 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.179848 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.179863 | orchestrator | 2025-09-19 11:49:04.179873 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-19 11:49:04.179882 | orchestrator | Friday 19 September 2025 11:42:51 +0000 (0:00:00.848) 0:02:30.002 ****** 2025-09-19 11:49:04.179892 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.179901 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.179910 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:49:04.179919 | orchestrator | 2025-09-19 11:49:04.179929 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-19 11:49:04.179939 | orchestrator | Friday 19 September 2025 11:43:03 +0000 (0:00:12.142) 0:02:42.145 ****** 2025-09-19 11:49:04.179948 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.179977 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 11:49:04.179987 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.179997 | orchestrator | 2025-09-19 11:49:04.180006 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-19 11:49:04.180016 | orchestrator | Friday 19 September 2025 11:43:06 +0000 (0:00:02.371) 0:02:44.517 ****** 2025-09-19 11:49:04.180025 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.180034 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.180044 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.180053 | orchestrator | 2025-09-19 11:49:04.180063 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-19 11:49:04.180072 | orchestrator | 2025-09-19 11:49:04.180081 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-19 11:49:04.180091 | orchestrator | Friday 19 September 2025 11:43:06 +0000 (0:00:00.308) 0:02:44.825 ****** 2025-09-19 11:49:04.180100 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:49:04.180110 | orchestrator | 2025-09-19 11:49:04.180120 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-19 11:49:04.180129 | orchestrator | Friday 19 September 2025 11:43:06 +0000 (0:00:00.535) 0:02:45.360 ****** 2025-09-19 11:49:04.180139 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-19 11:49:04.180148 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-19 11:49:04.180158 | orchestrator | 2025-09-19 11:49:04.180167 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-19 11:49:04.180185 | orchestrator | Friday 19 September 2025 11:43:10 +0000 (0:00:03.769) 0:02:49.129 ****** 2025-09-19 11:49:04.180195 | 
orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-19 11:49:04.180206 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-19 11:49:04.180215 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-09-19 11:49:04.180225 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-19 11:49:04.180235 | orchestrator | 2025-09-19 11:49:04.180244 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-19 11:49:04.180254 | orchestrator | Friday 19 September 2025 11:43:18 +0000 (0:00:07.475) 0:02:56.605 ****** 2025-09-19 11:49:04.180263 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 11:49:04.180273 | orchestrator | 2025-09-19 11:49:04.180282 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-19 11:49:04.180292 | orchestrator | Friday 19 September 2025 11:43:21 +0000 (0:00:03.638) 0:03:00.243 ****** 2025-09-19 11:49:04.180301 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 11:49:04.180311 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-19 11:49:04.180320 | orchestrator | 2025-09-19 11:49:04.180330 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-19 11:49:04.180339 | orchestrator | Friday 19 September 2025 11:43:25 +0000 (0:00:04.175) 0:03:04.419 ****** 2025-09-19 11:49:04.180349 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 11:49:04.180358 | orchestrator | 2025-09-19 11:49:04.180368 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-19 11:49:04.180377 | 
orchestrator | Friday 19 September 2025 11:43:29 +0000 (0:00:03.345) 0:03:07.764 ****** 2025-09-19 11:49:04.180386 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-09-19 11:49:04.180396 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-19 11:49:04.180405 | orchestrator | 2025-09-19 11:49:04.180420 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-19 11:49:04.180508 | orchestrator | Friday 19 September 2025 11:43:37 +0000 (0:00:08.480) 0:03:16.245 ****** 2025-09-19 11:49:04.180527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.180543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 
'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.180562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.180573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.180651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.180667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.180705 | orchestrator | 2025-09-19 11:49:04.180721 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-19 11:49:04.180731 | orchestrator | Friday 19 September 2025 11:43:40 +0000 (0:00:02.293) 0:03:18.538 ****** 2025-09-19 11:49:04.180740 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.180750 | orchestrator | 2025-09-19 11:49:04.180764 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-19 11:49:04.180781 | orchestrator | Friday 19 September 2025 11:43:40 +0000 (0:00:00.141) 0:03:18.680 ****** 2025-09-19 11:49:04.180797 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.180813 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.180831 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.180846 | orchestrator | 2025-09-19 11:49:04.180859 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-19 11:49:04.180869 | orchestrator | Friday 19 September 2025 11:43:40 +0000 (0:00:00.409) 0:03:19.089 ****** 2025-09-19 11:49:04.180878 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 11:49:04.180888 | orchestrator | 2025-09-19 11:49:04.180897 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-19 11:49:04.180906 | orchestrator | Friday 19 September 2025 11:43:41 +0000 (0:00:01.277) 0:03:20.367 ****** 2025-09-19 11:49:04.180916 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.180925 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 11:49:04.180934 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.180961 | orchestrator | 2025-09-19 11:49:04.180971 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-19 11:49:04.180980 | orchestrator | Friday 19 September 2025 11:43:42 +0000 (0:00:00.853) 0:03:21.221 ****** 2025-09-19 11:49:04.180989 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:49:04.180999 | orchestrator | 2025-09-19 11:49:04.181008 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-19 11:49:04.181018 | orchestrator | Friday 19 September 2025 11:43:44 +0000 (0:00:01.570) 0:03:22.792 ****** 2025-09-19 11:49:04.181070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.181084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.181110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.181122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.181132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.181173 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.181186 | orchestrator | 2025-09-19 11:49:04.181195 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-19 11:49:04.181205 | orchestrator | Friday 19 September 2025 11:43:47 +0000 (0:00:03.338) 0:03:26.130 ****** 2025-09-19 11:49:04.181215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:49:04.181231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.181242 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.181252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:49:04.181292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.181306 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.181318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:49:04.181337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.181349 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.181360 | orchestrator | 2025-09-19 11:49:04.181371 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-19 11:49:04.181383 | orchestrator | Friday 19 September 2025 11:43:48 +0000 (0:00:00.532) 0:03:26.662 ****** 2025-09-19 11:49:04.181394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:49:04.181407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.181422 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.181461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:49:04.181481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.181493 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.181504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:49:04.181518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.181529 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.181540 | orchestrator | 2025-09-19 11:49:04.181552 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-19 11:49:04.181562 | orchestrator | Friday 19 September 2025 11:43:49 +0000 (0:00:01.056) 0:03:27.719 ****** 2025-09-19 11:49:04.181612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.181628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 
11:49:04.181641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.181679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.181748 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.181764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.181781 | orchestrator | 2025-09-19 11:49:04.181798 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-19 11:49:04.181814 | orchestrator | Friday 19 September 2025 11:43:52 +0000 (0:00:03.599) 0:03:31.318 ****** 2025-09-19 11:49:04.181831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.181848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.181906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.181919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.181930 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.181940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.181950 | orchestrator | 2025-09-19 11:49:04.181960 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-19 11:49:04.181970 | orchestrator | Friday 19 September 2025 11:44:00 +0000 (0:00:08.053) 0:03:39.371 ****** 2025-09-19 11:49:04.182009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:49:04.182055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.182066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:49:04.182077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.182087 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.182097 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.182107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-19 11:49:04.182159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.182171 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.182181 | orchestrator | 2025-09-19 11:49:04.182191 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-19 11:49:04.182201 | orchestrator | Friday 19 September 2025 11:44:01 +0000 (0:00:00.771) 0:03:40.143 ****** 2025-09-19 11:49:04.182210 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:49:04.182220 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:49:04.182229 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:49:04.182239 | orchestrator | 
2025-09-19 11:49:04.182248 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-19 11:49:04.182258 | orchestrator | Friday 19 September 2025 11:44:04 +0000 (0:00:02.590) 0:03:42.733 ****** 2025-09-19 11:49:04.182267 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.182276 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.182284 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.182291 | orchestrator | 2025-09-19 11:49:04.182299 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-19 11:49:04.182307 | orchestrator | Friday 19 September 2025 11:44:05 +0000 (0:00:01.198) 0:03:43.932 ****** 2025-09-19 11:49:04.182315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.182324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.182364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 11:49:04.182375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.182384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.182392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.182404 | orchestrator | 2025-09-19 11:49:04.182412 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 11:49:04.182420 | orchestrator | Friday 19 September 2025 11:44:07 +0000 (0:00:02.194) 0:03:46.126 ****** 2025-09-19 11:49:04.182428 | orchestrator | 2025-09-19 11:49:04.182436 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 11:49:04.182444 | orchestrator | Friday 19 September 2025 11:44:07 +0000 (0:00:00.133) 0:03:46.259 ****** 2025-09-19 11:49:04.182451 | orchestrator | 2025-09-19 11:49:04.182459 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-19 11:49:04.182467 | orchestrator | Friday 19 September 2025 11:44:07 +0000 (0:00:00.098) 0:03:46.357 ****** 2025-09-19 11:49:04.182475 | orchestrator | 2025-09-19 11:49:04.182482 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-19 11:49:04.182490 | orchestrator | Friday 19 September 2025 11:44:08 +0000 (0:00:00.131) 0:03:46.489 ****** 2025-09-19 11:49:04.182498 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:49:04.182506 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:49:04.182513 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:49:04.182521 | orchestrator | 2025-09-19 11:49:04.182528 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 
2025-09-19 11:49:04.182536 | orchestrator | Friday 19 September 2025 11:44:25 +0000 (0:00:17.680) 0:04:04.169 ******
2025-09-19 11:49:04.182544 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.182551 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:49:04.182559 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:49:04.182567 | orchestrator |
2025-09-19 11:49:04.182575 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-09-19 11:49:04.182582 | orchestrator |
2025-09-19 11:49:04.182590 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-19 11:49:04.182598 | orchestrator | Friday 19 September 2025 11:44:33 +0000 (0:00:07.442) 0:04:11.613 ******
2025-09-19 11:49:04.182609 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:49:04.182618 | orchestrator |
2025-09-19 11:49:04.182648 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-19 11:49:04.182658 | orchestrator | Friday 19 September 2025 11:44:34 +0000 (0:00:01.639) 0:04:13.253 ******
2025-09-19 11:49:04.182666 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:49:04.182673 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:49:04.182681 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:49:04.182733 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.182741 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.182749 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.182760 | orchestrator |
2025-09-19 11:49:04.182775 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-09-19 11:49:04.182788 | orchestrator | Friday 19 September 2025 11:44:36 +0000 (0:00:01.703) 0:04:14.956 ******
2025-09-19 11:49:04.182801 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.182814 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.182829 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.182843 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 11:49:04.182854 | orchestrator |
2025-09-19 11:49:04.182862 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-19 11:49:04.182869 | orchestrator | Friday 19 September 2025 11:44:37 +0000 (0:00:01.142) 0:04:16.099 ******
2025-09-19 11:49:04.182877 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-09-19 11:49:04.182885 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-09-19 11:49:04.182893 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-09-19 11:49:04.182901 | orchestrator |
2025-09-19 11:49:04.182916 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-19 11:49:04.182923 | orchestrator | Friday 19 September 2025 11:44:39 +0000 (0:00:01.465) 0:04:17.564 ******
2025-09-19 11:49:04.182931 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-09-19 11:49:04.182939 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-09-19 11:49:04.182947 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-09-19 11:49:04.182954 | orchestrator |
2025-09-19 11:49:04.182962 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-19 11:49:04.182970 | orchestrator | Friday 19 September 2025 11:44:40 +0000 (0:00:01.452) 0:04:19.016 ******
2025-09-19 11:49:04.182977 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-09-19 11:49:04.182985 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:49:04.182993 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-09-19 11:49:04.183001 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:49:04.183008 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-09-19 11:49:04.183016 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:49:04.183024 | orchestrator |
2025-09-19 11:49:04.183032 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-09-19 11:49:04.183039 | orchestrator | Friday 19 September 2025 11:44:41 +0000 (0:00:00.831) 0:04:19.848 ******
2025-09-19 11:49:04.183047 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 11:49:04.183055 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 11:49:04.183062 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 11:49:04.183070 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 11:49:04.183078 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.183086 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 11:49:04.183093 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 11:49:04.183101 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.183108 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 11:49:04.183116 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 11:49:04.183124 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.183132 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 11:49:04.183139 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 11:49:04.183147 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 11:49:04.183154 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 11:49:04.183162 | orchestrator |
2025-09-19 11:49:04.183170 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-09-19 11:49:04.183177 | orchestrator | Friday 19 September 2025 11:44:43 +0000 (0:00:02.426) 0:04:22.275 ******
2025-09-19 11:49:04.183185 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.183191 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.183198 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.183204 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:49:04.183211 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:49:04.183218 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:49:04.183224 | orchestrator |
2025-09-19 11:49:04.183231 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-09-19 11:49:04.183237 | orchestrator | Friday 19 September 2025 11:44:45 +0000 (0:00:01.223) 0:04:23.498 ******
2025-09-19 11:49:04.183244 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.183250 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.183257 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.183267 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:49:04.183274 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:49:04.183280 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:49:04.183287 | orchestrator |
2025-09-19 11:49:04.183294 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-19 11:49:04.183304 | orchestrator | Friday 19 September 2025 11:44:47 +0000 (0:00:02.504) 0:04:26.003 ******
2025-09-19 11:49:04.183334 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name':
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183343 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183358 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183366 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}}) 2025-09-19 11:49:04.183420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183427 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183474 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:49:04.183488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.183494 | orchestrator |
2025-09-19 11:49:04.183501 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-19 11:49:04.183508 | orchestrator | Friday 19 September 2025 11:44:51 +0000 (0:00:03.927) 0:04:29.931 ******
2025-09-19 11:49:04.183515 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:49:04.183522 | orchestrator |
2025-09-19 11:49:04.183528 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-09-19 11:49:04.183535 | orchestrator | Friday 19 September 2025 11:44:52 +0000 (0:00:01.004) 0:04:30.935 ******
2025-09-19 11:49:04.183542 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode':
'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183577 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183600 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183647 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183663 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183676 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183734 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 11:49:04.183742 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.183749 | orchestrator |
2025-09-19 11:49:04.183759 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-09-19 11:49:04.183772 | orchestrator | Friday 19 September 2025 11:44:56 +0000 (0:00:04.345) 0:04:35.280 ******
2025-09-19 11:49:04.183784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
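The br_netfilter module-load and bridge-nf-call tasks earlier in this play boil down to a few lines of shell. A minimal sketch of the equivalent manual steps, assuming the standard modules-load.d persistence mechanism (the log names the tasks but not their exact commands); CONF_DIR is a stand-in for /etc/modules-load.d that defaults to a scratch directory so the sketch runs without root:

```shell
# Stand-in for /etc/modules-load.d; override with the real path on a compute node.
CONF_DIR="${CONF_DIR:-$(mktemp -d)}"

# "Load modules": load the bridge netfilter module now (root only, so left commented).
# modprobe br_netfilter

# "Persist modules via modules-load.d": have systemd load the module on every boot.
echo br_netfilter > "${CONF_DIR}/br_netfilter.conf"

# "Enable bridge-nf-call sysctl variables": make bridged traffic visible to iptables
# (root only, so left commented).
# sysctl -w net.bridge.bridge-nf-call-iptables=1
# sysctl -w net.bridge.bridge-nf-call-ip6tables=1
```

In the log these steps report changed only on the compute hosts testbed-node-3/4/5 and are skipped on the controllers, which matches the sysctls being a prerequisite for nova-compute networking rather than for the control-plane containers.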
2025-09-19 11:49:04.183796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:49:04.183816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.183830 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:49:04.183865 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:49:04.183873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:49:04.183881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.183887 | 
orchestrator | skipping: [testbed-node-4] 2025-09-19 11:49:04.183894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:49:04.183906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:49:04.183913 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.183923 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:49:04.183948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 11:49:04.183957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.183963 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.183970 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 11:49:04.183977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.183988 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.183995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 11:49:04.184002 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.184009 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.184016 | orchestrator | 2025-09-19 11:49:04.184022 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-19 11:49:04.184029 | orchestrator | Friday 19 September 2025 11:44:59 +0000 (0:00:02.352) 0:04:37.633 ****** 2025-09-19 11:49:04.184057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:49:04.184065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:49:04.184072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:49:04.184084 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.184091 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:49:04.184098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:49:04.184124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.184133 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:49:04.184140 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 11:49:04.184147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 11:49:04.184160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.184167 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:49:04.184174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 11:49:04.184181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.184188 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.184215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 11:49:04.184224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.184230 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.184237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 11:49:04.184248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 11:49:04.184255 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.184262 | orchestrator | 2025-09-19 11:49:04.184269 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 11:49:04.184275 | orchestrator | Friday 19 September 2025 11:45:01 +0000 (0:00:02.483) 0:04:40.117 ****** 2025-09-19 11:49:04.184282 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.184288 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.184295 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.184302 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 11:49:04.184308 | orchestrator | 2025-09-19 11:49:04.184315 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-19 11:49:04.184322 | orchestrator | Friday 19 September 2025 11:45:02 +0000 (0:00:00.950) 0:04:41.067 ****** 2025-09-19 11:49:04.184328 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 11:49:04.184335 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 11:49:04.184341 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 11:49:04.184348 | orchestrator | 2025-09-19 11:49:04.184354 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-19 11:49:04.184361 | orchestrator | Friday 19 September 2025 11:45:03 +0000 (0:00:01.167) 0:04:42.235 ****** 2025-09-19 11:49:04.184368 | orchestrator | 
ok: [testbed-node-3 -> localhost] 2025-09-19 11:49:04.184374 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 11:49:04.184381 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 11:49:04.184387 | orchestrator | 2025-09-19 11:49:04.184394 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-19 11:49:04.184401 | orchestrator | Friday 19 September 2025 11:45:05 +0000 (0:00:01.614) 0:04:43.849 ****** 2025-09-19 11:49:04.184407 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:49:04.184414 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:49:04.184421 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:49:04.184427 | orchestrator | 2025-09-19 11:49:04.184434 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-19 11:49:04.184440 | orchestrator | Friday 19 September 2025 11:45:06 +0000 (0:00:00.627) 0:04:44.477 ****** 2025-09-19 11:49:04.184447 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:49:04.184453 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:49:04.184460 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:49:04.184467 | orchestrator | 2025-09-19 11:49:04.184473 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-09-19 11:49:04.184485 | orchestrator | Friday 19 September 2025 11:45:06 +0000 (0:00:00.514) 0:04:44.992 ****** 2025-09-19 11:49:04.184492 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-19 11:49:04.184515 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-19 11:49:04.184527 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-19 11:49:04.184534 | orchestrator | 2025-09-19 11:49:04.184541 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-19 11:49:04.184547 | orchestrator | Friday 19 September 2025 11:45:07 +0000 (0:00:01.050) 0:04:46.042 
****** 2025-09-19 11:49:04.184554 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-19 11:49:04.184561 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-19 11:49:04.184567 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-19 11:49:04.184574 | orchestrator | 2025-09-19 11:49:04.184581 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-19 11:49:04.184587 | orchestrator | Friday 19 September 2025 11:45:08 +0000 (0:00:01.286) 0:04:47.329 ****** 2025-09-19 11:49:04.184594 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-19 11:49:04.184600 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-19 11:49:04.184607 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-19 11:49:04.184613 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-19 11:49:04.184620 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-19 11:49:04.184626 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-19 11:49:04.184633 | orchestrator | 2025-09-19 11:49:04.184639 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-19 11:49:04.184646 | orchestrator | Friday 19 September 2025 11:45:12 +0000 (0:00:03.918) 0:04:51.247 ****** 2025-09-19 11:49:04.184652 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:49:04.184659 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:49:04.184665 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:49:04.184672 | orchestrator | 2025-09-19 11:49:04.184678 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-09-19 11:49:04.184722 | orchestrator | Friday 19 September 2025 11:45:13 +0000 (0:00:00.274) 0:04:51.521 ****** 2025-09-19 11:49:04.184729 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
11:49:04.184736 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:49:04.184743 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:49:04.184749 | orchestrator | 2025-09-19 11:49:04.184756 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-19 11:49:04.184763 | orchestrator | Friday 19 September 2025 11:45:13 +0000 (0:00:00.279) 0:04:51.801 ****** 2025-09-19 11:49:04.184769 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:49:04.184776 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:49:04.184782 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:49:04.184789 | orchestrator | 2025-09-19 11:49:04.184796 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-19 11:49:04.184802 | orchestrator | Friday 19 September 2025 11:45:14 +0000 (0:00:01.594) 0:04:53.395 ****** 2025-09-19 11:49:04.184809 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-19 11:49:04.184816 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-19 11:49:04.184823 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-19 11:49:04.184829 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-19 11:49:04.184836 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-19 11:49:04.184843 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-19 
11:49:04.184849 | orchestrator | 2025-09-19 11:49:04.184856 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-19 11:49:04.184867 | orchestrator | Friday 19 September 2025 11:45:18 +0000 (0:00:03.712) 0:04:57.107 ****** 2025-09-19 11:49:04.184874 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 11:49:04.184880 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 11:49:04.184886 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 11:49:04.184892 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 11:49:04.184899 | orchestrator | changed: [testbed-node-4] 2025-09-19 11:49:04.184905 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 11:49:04.184911 | orchestrator | changed: [testbed-node-3] 2025-09-19 11:49:04.184917 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 11:49:04.184923 | orchestrator | changed: [testbed-node-5] 2025-09-19 11:49:04.184929 | orchestrator | 2025-09-19 11:49:04.184935 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-19 11:49:04.184942 | orchestrator | Friday 19 September 2025 11:45:21 +0000 (0:00:03.147) 0:05:00.255 ****** 2025-09-19 11:49:04.184948 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:49:04.184954 | orchestrator | 2025-09-19 11:49:04.184960 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-19 11:49:04.184966 | orchestrator | Friday 19 September 2025 11:45:21 +0000 (0:00:00.123) 0:05:00.378 ****** 2025-09-19 11:49:04.184972 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:49:04.184979 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:49:04.184985 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:49:04.184991 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.184997 | orchestrator | skipping: [testbed-node-1] 
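The "Pushing nova secret xml for libvirt" task above renders a libvirt secret definition for each ceph client (the `client.nova` and `client.cinder` items with their UUIDs) before the key is loaded on the compute nodes. As a rough illustration only — this is not kolla-ansible's actual template — the XML handed to `virsh secret-define` looks approximately like this:

```python
# Hedged sketch of the ceph secret XML that a task like
# "Pushing nova secret xml for libvirt" defines per client; the real
# kolla-ansible template may differ in attributes and formatting.
import xml.etree.ElementTree as ET


def make_ceph_secret_xml(uuid: str, usage_name: str) -> str:
    # libvirt ceph secrets are persistent ("ephemeral=no") and readable
    # ("private=no") so nova/cinder can reference them by UUID.
    secret = ET.Element("secret", ephemeral="no", private="no")
    ET.SubElement(secret, "uuid").text = uuid
    usage = ET.SubElement(secret, "usage", type="ceph")
    ET.SubElement(usage, "name").text = usage_name
    return ET.tostring(secret, encoding="unicode")


# UUID and name taken from the loop items echoed in the log above.
xml = make_ceph_secret_xml("5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd",
                           "client.nova secret")
```

The follow-up "Pushing secrets key for libvirt" task then supplies the actual keyring value for each UUID (conceptually what `virsh secret-set-value` does), which is why it runs once per secret and once more per host.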
2025-09-19 11:49:04.185003 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.185010 | orchestrator | 2025-09-19 11:49:04.185019 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-19 11:49:04.185045 | orchestrator | Friday 19 September 2025 11:45:22 +0000 (0:00:00.683) 0:05:01.062 ****** 2025-09-19 11:49:04.185052 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 11:49:04.185058 | orchestrator | 2025-09-19 11:49:04.185065 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-19 11:49:04.185071 | orchestrator | Friday 19 September 2025 11:45:23 +0000 (0:00:00.649) 0:05:01.711 ****** 2025-09-19 11:49:04.185077 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:49:04.185083 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:49:04.185089 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:49:04.185100 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.185111 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.185121 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.185131 | orchestrator | 2025-09-19 11:49:04.185140 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-19 11:49:04.185150 | orchestrator | Friday 19 September 2025 11:45:23 +0000 (0:00:00.518) 0:05:02.230 ****** 2025-09-19 11:49:04.185160 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:49:04.185170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 11:49:04.185186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 11:49:04.185197 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:49:04.185218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:49:04.185230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:49:04.185241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.185259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:49:04.185270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.185282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:49:04.185297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:49:04.185312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.185319 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.185331 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.185337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.185343 | orchestrator |
2025-09-19 11:49:04.185350 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-09-19 11:49:04.185356 | orchestrator | Friday 19 September 2025 11:45:27 +0000 (0:00:04.131) 0:05:06.361 ******
2025-09-19 11:49:04.185365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:49:04.185375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:49:04.185382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:49:04.185392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:49:04.185399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:49:04.185405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:49:04.185418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:49:04.185425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.185432 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.185442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:49:04.185449 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.185455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:49:04.185467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.185474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.185485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.185492 | orchestrator |
2025-09-19 11:49:04.185498 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-09-19 11:49:04.185504 | orchestrator | Friday 19 September 2025 11:45:33 +0000 (0:00:05.667) 0:05:12.028 ******
2025-09-19 11:49:04.185510 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:49:04.185517 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:49:04.185523 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:49:04.185529 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.185535 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.185541 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.185547 | orchestrator |
2025-09-19 11:49:04.185553 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-09-19 11:49:04.185559 | orchestrator | Friday 19 September 2025 11:45:34 +0000 (0:00:01.296) 0:05:13.325 ******
2025-09-19 11:49:04.185565 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 11:49:04.185571 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 11:49:04.185577 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 11:49:04.185583 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 11:49:04.185589 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 11:49:04.185595 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.185601 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 11:49:04.185607 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 11:49:04.185613 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.185619 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-09-19 11:49:04.185625 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 11:49:04.185632 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.185638 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 11:49:04.185644 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 11:49:04.185650 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 11:49:04.185656 | orchestrator |
2025-09-19 11:49:04.185662 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-09-19 11:49:04.185668 | orchestrator | Friday 19 September 2025 11:45:38 +0000 (0:00:03.866) 0:05:17.191 ******
2025-09-19 11:49:04.185674 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:49:04.185680 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:49:04.185700 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:49:04.185707 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.185713 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.185719 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.185725 | orchestrator |
2025-09-19 11:49:04.185731 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-09-19 11:49:04.185737 | orchestrator | Friday 19 September 2025 11:45:39 +0000 (0:00:00.685) 0:05:17.877 ******
2025-09-19 11:49:04.185749 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 11:49:04.185759 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 11:49:04.185768 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 11:49:04.185775 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 11:49:04.185781 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 11:49:04.185787 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 11:49:04.185793 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 11:49:04.185800 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 11:49:04.185806 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.185812 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 11:49:04.185818 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 11:49:04.185824 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 11:49:04.185830 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 11:49:04.185837 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.185843 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 11:49:04.185849 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.185855 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 11:49:04.185861 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 11:49:04.185867 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 11:49:04.185873 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 11:49:04.185880 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 11:49:04.185886 | orchestrator |
2025-09-19 11:49:04.185892 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-09-19 11:49:04.185898 | orchestrator | Friday 19 September 2025 11:45:46 +0000 (0:00:07.232) 0:05:25.110 ******
2025-09-19 11:49:04.185904 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 11:49:04.185911 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 11:49:04.185917 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 11:49:04.185923 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 11:49:04.185929 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 11:49:04.185935 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 11:49:04.185942 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 11:49:04.185948 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 11:49:04.185958 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 11:49:04.185964 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 11:49:04.185970 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 11:49:04.185976 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 11:49:04.185982 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 11:49:04.185989 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 11:49:04.185995 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.186001 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.186007 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 11:49:04.186013 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.186037 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 11:49:04.186043 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 11:49:04.186050 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 11:49:04.186056 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 11:49:04.186065 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 11:49:04.186075 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 11:49:04.186081 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 11:49:04.186088 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 11:49:04.186094 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 11:49:04.186100 | orchestrator |
2025-09-19 11:49:04.186106 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-09-19 11:49:04.186113 | orchestrator | Friday 19 September 2025 11:45:53 +0000 (0:00:07.199) 0:05:32.310 ******
2025-09-19 11:49:04.186119 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:49:04.186125 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:49:04.186131 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:49:04.186138 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.186144 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.186150 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.186156 | orchestrator |
2025-09-19 11:49:04.186162 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-09-19 11:49:04.186169 | orchestrator | Friday 19 September 2025 11:45:54 +0000 (0:00:00.593) 0:05:32.904 ******
2025-09-19 11:49:04.186175 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:49:04.186181 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:49:04.186187 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:49:04.186193 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.186199 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.186205 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.186212 | orchestrator |
2025-09-19 11:49:04.186218 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-09-19 11:49:04.186224 | orchestrator | Friday 19 September 2025 11:45:55 +0000 (0:00:00.785) 0:05:33.689 ******
2025-09-19 11:49:04.186230 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.186236 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:49:04.186243 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:49:04.186249 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.186255 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.186261 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:49:04.186270 | orchestrator |
2025-09-19 11:49:04.186277 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-09-19 11:49:04.186283 | orchestrator | Friday 19 September 2025 11:45:57 +0000 (0:00:02.181) 0:05:35.870 ******
2025-09-19 11:49:04.186289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:49:04.186296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:49:04.186303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.186309 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:49:04.186322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:49:04.186329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:49:04.186340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.186346 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:49:04.186353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:49:04.186360 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:49:04.186373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.186380 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:49:04.186386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:49:04.186397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.186403 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.186410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:49:04.186416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.186423 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.186429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:49:04.186441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.186448 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.186455 | orchestrator |
2025-09-19 11:49:04.186461 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-09-19 11:49:04.186467 | orchestrator | Friday 19 September 2025 11:46:00 +0000 (0:00:03.048) 0:05:38.918 ******
2025-09-19 11:49:04.186474 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-19 11:49:04.186480 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-19 11:49:04.186486 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:49:04.186492 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-19 11:49:04.186502 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-19 11:49:04.186508 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:49:04.186514 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-19 11:49:04.186521 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-19 11:49:04.186527 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:49:04.186533 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-19 11:49:04.186539 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-19 11:49:04.186545 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.186551 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-19 11:49:04.186557 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-19 11:49:04.186563 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.186570 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-19 11:49:04.186576 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-19 11:49:04.186582 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.186588 |
orchestrator |
2025-09-19 11:49:04.186594 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-09-19 11:49:04.186600 | orchestrator | Friday 19 September 2025 11:46:01 +0000 (0:00:00.602) 0:05:39.521 ******
2025-09-19 11:49:04.186607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:49:04.186614 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:49:04.186626 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 11:49:04.186637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:49:04.186643 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:49:04.186650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:49:04.186657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:49:04.186663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 11:49:04.186670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 11:49:04.186692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.186703 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.186710 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.186716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.186723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.186729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 11:49:04.186741 | orchestrator |
2025-09-19 11:49:04.186747 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-19 11:49:04.186754 | orchestrator | Friday 19 September 2025 11:46:04 +0000 (0:00:03.119) 0:05:42.641 ******
2025-09-19 11:49:04.186760 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:49:04.186769 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:49:04.186775 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:49:04.186785 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.186791 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.186797 | orchestrator | skipping:
[testbed-node-2] 2025-09-19 11:49:04.186804 | orchestrator | 2025-09-19 11:49:04.186810 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 11:49:04.186816 | orchestrator | Friday 19 September 2025 11:46:04 +0000 (0:00:00.588) 0:05:43.229 ****** 2025-09-19 11:49:04.186822 | orchestrator | 2025-09-19 11:49:04.186829 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 11:49:04.186835 | orchestrator | Friday 19 September 2025 11:46:04 +0000 (0:00:00.120) 0:05:43.350 ****** 2025-09-19 11:49:04.186841 | orchestrator | 2025-09-19 11:49:04.186847 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 11:49:04.186853 | orchestrator | Friday 19 September 2025 11:46:05 +0000 (0:00:00.225) 0:05:43.576 ****** 2025-09-19 11:49:04.186859 | orchestrator | 2025-09-19 11:49:04.186866 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 11:49:04.186872 | orchestrator | Friday 19 September 2025 11:46:05 +0000 (0:00:00.135) 0:05:43.711 ****** 2025-09-19 11:49:04.186878 | orchestrator | 2025-09-19 11:49:04.186885 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 11:49:04.186891 | orchestrator | Friday 19 September 2025 11:46:05 +0000 (0:00:00.150) 0:05:43.862 ****** 2025-09-19 11:49:04.186897 | orchestrator | 2025-09-19 11:49:04.186903 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-19 11:49:04.186909 | orchestrator | Friday 19 September 2025 11:46:05 +0000 (0:00:00.152) 0:05:44.014 ****** 2025-09-19 11:49:04.186916 | orchestrator | 2025-09-19 11:49:04.186922 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-19 11:49:04.186928 | orchestrator | Friday 19 September 2025 11:46:05 +0000 (0:00:00.118) 
0:05:44.133 ******
2025-09-19 11:49:04.186934 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.186940 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:49:04.186947 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:49:04.186953 | orchestrator |
2025-09-19 11:49:04.186959 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-09-19 11:49:04.186965 | orchestrator | Friday 19 September 2025 11:46:19 +0000 (0:00:13.798) 0:05:57.931 ******
2025-09-19 11:49:04.186972 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.186978 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:49:04.186984 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:49:04.186990 | orchestrator |
2025-09-19 11:49:04.186996 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-09-19 11:49:04.187003 | orchestrator | Friday 19 September 2025 11:46:32 +0000 (0:00:12.801) 0:06:10.733 ******
2025-09-19 11:49:04.187009 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:49:04.187015 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:49:04.187021 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:49:04.187028 | orchestrator |
2025-09-19 11:49:04.187034 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-09-19 11:49:04.187040 | orchestrator | Friday 19 September 2025 11:46:54 +0000 (0:00:22.332) 0:06:33.065 ******
2025-09-19 11:49:04.187046 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:49:04.187052 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:49:04.187059 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:49:04.187065 | orchestrator |
2025-09-19 11:49:04.187071 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-09-19 11:49:04.187081 | orchestrator | Friday 19 September 2025 11:47:28 +0000 (0:00:33.552) 0:07:06.618 ******
2025-09-19 11:49:04.187087 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:49:04.187093 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:49:04.187099 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:49:04.187106 | orchestrator |
2025-09-19 11:49:04.187112 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-09-19 11:49:04.187118 | orchestrator | Friday 19 September 2025 11:47:29 +0000 (0:00:00.924) 0:07:07.542 ******
2025-09-19 11:49:04.187124 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:49:04.187130 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:49:04.187137 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:49:04.187143 | orchestrator |
2025-09-19 11:49:04.187149 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-09-19 11:49:04.187155 | orchestrator | Friday 19 September 2025 11:47:30 +0000 (0:00:01.013) 0:07:08.556 ******
2025-09-19 11:49:04.187162 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:49:04.187168 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:49:04.187174 | orchestrator | changed: [testbed-node-5]
2025-09-19 11:49:04.187180 | orchestrator |
2025-09-19 11:49:04.187186 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-09-19 11:49:04.187192 | orchestrator | Friday 19 September 2025 11:47:54 +0000 (0:00:24.679) 0:07:33.235 ******
2025-09-19 11:49:04.187199 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:49:04.187205 | orchestrator |
2025-09-19 11:49:04.187211 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-09-19 11:49:04.187217 | orchestrator | Friday 19 September 2025 11:47:54 +0000 (0:00:00.138) 0:07:33.374 ******
2025-09-19 11:49:04.187224 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.187230 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:49:04.187236 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:49:04.187242 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.187248 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.187255 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-09-19 11:49:04.187261 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:49:04.187267 | orchestrator |
2025-09-19 11:49:04.187273 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-09-19 11:49:04.187280 | orchestrator | Friday 19 September 2025 11:48:16 +0000 (0:00:22.048) 0:07:55.422 ******
2025-09-19 11:49:04.187286 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:49:04.187292 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.187298 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:49:04.187304 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.187313 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:49:04.187320 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.187326 | orchestrator |
2025-09-19 11:49:04.187332 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-09-19 11:49:04.187338 | orchestrator | Friday 19 September 2025 11:48:24 +0000 (0:00:07.612) 0:08:03.034 ******
2025-09-19 11:49:04.187345 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:49:04.187372 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:49:04.187379 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.187385 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.187391 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.187397 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-09-19 11:49:04.187404 | orchestrator |
2025-09-19 11:49:04.187410 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-19 11:49:04.187416 | orchestrator | Friday 19 September 2025 11:48:28 +0000 (0:00:03.991) 0:08:07.026 ******
2025-09-19 11:49:04.187422 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:49:04.187432 | orchestrator |
2025-09-19 11:49:04.187438 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-19 11:49:04.187444 | orchestrator | Friday 19 September 2025 11:48:41 +0000 (0:00:12.976) 0:08:20.002 ******
2025-09-19 11:49:04.187450 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:49:04.187456 | orchestrator |
2025-09-19 11:49:04.187463 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-09-19 11:49:04.187469 | orchestrator | Friday 19 September 2025 11:48:42 +0000 (0:00:01.221) 0:08:21.223 ******
2025-09-19 11:49:04.187475 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:49:04.187481 | orchestrator |
2025-09-19 11:49:04.187487 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-09-19 11:49:04.187494 | orchestrator | Friday 19 September 2025 11:48:43 +0000 (0:00:01.144) 0:08:22.368 ******
2025-09-19 11:49:04.187500 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:49:04.187506 | orchestrator |
2025-09-19 11:49:04.187512 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-09-19 11:49:04.187518 | orchestrator | Friday 19 September 2025 11:48:55 +0000 (0:00:11.216) 0:08:33.584 ******
2025-09-19 11:49:04.187525 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:49:04.187531 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:49:04.187537 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:49:04.187543 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:49:04.187549 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:49:04.187555 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:49:04.187561 | orchestrator |
2025-09-19 11:49:04.187568 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-09-19 11:49:04.187574 | orchestrator |
2025-09-19 11:49:04.187580 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-09-19 11:49:04.187586 | orchestrator | Friday 19 September 2025 11:48:56 +0000 (0:00:01.740) 0:08:35.325 ******
2025-09-19 11:49:04.187592 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:49:04.187599 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:49:04.187605 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:49:04.187611 | orchestrator |
2025-09-19 11:49:04.187617 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-09-19 11:49:04.187623 | orchestrator |
2025-09-19 11:49:04.187629 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-09-19 11:49:04.187636 | orchestrator | Friday 19 September 2025 11:48:57 +0000 (0:00:00.911) 0:08:36.237 ******
2025-09-19 11:49:04.187642 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.187648 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.187654 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.187660 | orchestrator |
2025-09-19 11:49:04.187666 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-09-19 11:49:04.187673 | orchestrator |
2025-09-19 11:49:04.187679 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-09-19 11:49:04.187715 | orchestrator | Friday 19 September 2025 11:48:58 +0000 (0:00:00.561) 0:08:36.798 ******
2025-09-19 11:49:04.187721 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-09-19 11:49:04.187728 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-19 11:49:04.187734 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-19 11:49:04.187740 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-09-19 11:49:04.187746 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-09-19 11:49:04.187752 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-09-19 11:49:04.187759 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-09-19 11:49:04.187765 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-19 11:49:04.187771 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-19 11:49:04.187781 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-09-19 11:49:04.187788 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-09-19 11:49:04.187794 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-09-19 11:49:04.187800 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:49:04.187806 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-09-19 11:49:04.187812 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-19 11:49:04.187819 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-19 11:49:04.187825 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-09-19 11:49:04.187831 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-09-19 11:49:04.187837 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-09-19 11:49:04.187843 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:49:04.187853 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-09-19 11:49:04.187862 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-19 11:49:04.187869 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-19 11:49:04.187875 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-09-19 11:49:04.187881 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-09-19 11:49:04.187888 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-09-19 11:49:04.187894 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:49:04.187900 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-09-19 11:49:04.187908 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-19 11:49:04.187918 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-19 11:49:04.187930 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-09-19 11:49:04.187941 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-09-19 11:49:04.187951 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-09-19 11:49:04.187962 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.187972 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.187982 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-09-19 11:49:04.187992 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-19 11:49:04.188001 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-19 11:49:04.188012 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-09-19 11:49:04.188022 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-09-19 11:49:04.188032 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-09-19 11:49:04.188043 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.188053 | orchestrator |
2025-09-19 11:49:04.188065 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-09-19 11:49:04.188075 | orchestrator |
2025-09-19 11:49:04.188086 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-09-19 11:49:04.188097 | orchestrator | Friday 19 September 2025 11:48:59 +0000 (0:00:01.117) 0:08:37.916 ******
2025-09-19 11:49:04.188107 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-09-19 11:49:04.188117 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-09-19 11:49:04.188124 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.188130 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-09-19 11:49:04.188136 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-09-19 11:49:04.188142 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:49:04.188148 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-09-19 11:49:04.188154 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-09-19 11:49:04.188160 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:49:04.188171 | orchestrator |
2025-09-19 11:49:04.188177 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-09-19 11:49:04.188184 | orchestrator |
2025-09-19 11:49:04.188190 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-09-19 11:49:04.188196 | orchestrator | Friday 19 September 2025 11:48:59 +0000 (0:00:00.467) 0:08:38.383 ******
2025-09-19 11:49:04.188202 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:49:04.188208 | orchestrator |
2025-09-19 11:49:04.188214 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-09-19 11:49:04.188220 | orchestrator |
2025-09-19 11:49:04.188226 | orchestrator | TASK
[nova-cell : Run Nova cell online database migrations] ******************** 2025-09-19 11:49:04.188232 | orchestrator | Friday 19 September 2025 11:49:00 +0000 (0:00:00.719) 0:08:39.103 ****** 2025-09-19 11:49:04.188238 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:49:04.188244 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:49:04.188249 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:49:04.188254 | orchestrator | 2025-09-19 11:49:04.188259 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:49:04.188265 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:49:04.188270 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-19 11:49:04.188276 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-19 11:49:04.188281 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-19 11:49:04.188286 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-19 11:49:04.188292 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-19 11:49:04.188297 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-19 11:49:04.188302 | orchestrator | 2025-09-19 11:49:04.188308 | orchestrator | 2025-09-19 11:49:04.188313 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:49:04.188318 | orchestrator | Friday 19 September 2025 11:49:01 +0000 (0:00:00.364) 0:08:39.467 ****** 2025-09-19 11:49:04.188327 | orchestrator | =============================================================================== 2025-09-19 11:49:04.188336 | 
orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 33.55s 2025-09-19 11:49:04.188342 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.91s 2025-09-19 11:49:04.188347 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.68s 2025-09-19 11:49:04.188353 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.33s 2025-09-19 11:49:04.188358 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.05s 2025-09-19 11:49:04.188363 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.28s 2025-09-19 11:49:04.188368 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.99s 2025-09-19 11:49:04.188374 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 17.68s 2025-09-19 11:49:04.188379 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.03s 2025-09-19 11:49:04.188384 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.80s 2025-09-19 11:49:04.188390 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.00s 2025-09-19 11:49:04.188398 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.98s 2025-09-19 11:49:04.188403 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.80s 2025-09-19 11:49:04.188409 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.14s 2025-09-19 11:49:04.188414 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.66s 2025-09-19 11:49:04.188419 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.22s 2025-09-19 11:49:04.188424 | orchestrator | 
service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.91s
service-ks-register : nova | Granting user roles ------------------------ 8.48s
nova : Copying over nova.conf ------------------------------------------- 8.05s
nova-cell : Fail if nova-compute service failed to register ------------- 7.61s
2025-09-19 11:49:04 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state STARTED
2025-09-19 11:49:04 | INFO  | Wait 1 second(s) until the next check
2025-09-19 11:51:45 | INFO  | Task 049866ba-c363-4750-aa9b-b2351504060d is in state SUCCESS

PLAY [Group hosts based on configuration]
************************************** 2025-09-19 11:51:45.424956 | orchestrator | 2025-09-19 11:51:45.424969 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 11:51:45.425269 | orchestrator | Friday 19 September 2025 11:47:04 +0000 (0:00:00.324) 0:00:00.324 ****** 2025-09-19 11:51:45.425284 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:51:45.425296 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:51:45.425307 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:51:45.425317 | orchestrator | 2025-09-19 11:51:45.425329 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 11:51:45.425366 | orchestrator | Friday 19 September 2025 11:47:05 +0000 (0:00:00.464) 0:00:00.788 ****** 2025-09-19 11:51:45.425378 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-19 11:51:45.425389 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-19 11:51:45.425400 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-19 11:51:45.425411 | orchestrator | 2025-09-19 11:51:45.425421 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-19 11:51:45.425432 | orchestrator | 2025-09-19 11:51:45.425444 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 11:51:45.425455 | orchestrator | Friday 19 September 2025 11:47:05 +0000 (0:00:00.472) 0:00:01.260 ****** 2025-09-19 11:51:45.425466 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:51:45.425477 | orchestrator | 2025-09-19 11:51:45.426124 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-19 11:51:45.426161 | orchestrator | Friday 19 September 2025 11:47:06 +0000 (0:00:00.590) 0:00:01.850 ****** 2025-09-19 
11:51:45.426173 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-19 11:51:45.426184 | orchestrator | 2025-09-19 11:51:45.426195 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-19 11:51:45.426206 | orchestrator | Friday 19 September 2025 11:47:10 +0000 (0:00:03.979) 0:00:05.830 ****** 2025-09-19 11:51:45.426216 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-19 11:51:45.426227 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-19 11:51:45.426238 | orchestrator | 2025-09-19 11:51:45.426249 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-19 11:51:45.426260 | orchestrator | Friday 19 September 2025 11:47:16 +0000 (0:00:06.673) 0:00:12.504 ****** 2025-09-19 11:51:45.426270 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 11:51:45.426281 | orchestrator | 2025-09-19 11:51:45.426292 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-19 11:51:45.426302 | orchestrator | Friday 19 September 2025 11:47:19 +0000 (0:00:02.937) 0:00:15.441 ****** 2025-09-19 11:51:45.426313 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 11:51:45.426324 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-19 11:51:45.426335 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-19 11:51:45.426345 | orchestrator | 2025-09-19 11:51:45.426356 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-19 11:51:45.426367 | orchestrator | Friday 19 September 2025 11:47:28 +0000 (0:00:08.263) 0:00:23.705 ****** 2025-09-19 11:51:45.426377 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 
11:51:45.426388 | orchestrator | 2025-09-19 11:51:45.426399 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-19 11:51:45.426409 | orchestrator | Friday 19 September 2025 11:47:31 +0000 (0:00:03.446) 0:00:27.151 ****** 2025-09-19 11:51:45.426420 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-19 11:51:45.426430 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-19 11:51:45.426441 | orchestrator | 2025-09-19 11:51:45.426453 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-19 11:51:45.426464 | orchestrator | Friday 19 September 2025 11:47:39 +0000 (0:00:07.463) 0:00:34.615 ****** 2025-09-19 11:51:45.426474 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-19 11:51:45.426529 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-19 11:51:45.426542 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-19 11:51:45.426553 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-19 11:51:45.426607 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-19 11:51:45.426619 | orchestrator | 2025-09-19 11:51:45.426630 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 11:51:45.426641 | orchestrator | Friday 19 September 2025 11:47:56 +0000 (0:00:17.048) 0:00:51.663 ****** 2025-09-19 11:51:45.426654 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:51:45.426666 | orchestrator | 2025-09-19 11:51:45.426679 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-19 11:51:45.426692 | orchestrator | Friday 19 September 2025 11:47:57 +0000 (0:00:01.128) 
0:00:52.791 ****** 2025-09-19 11:51:45.426705 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.426717 | orchestrator | 2025-09-19 11:51:45.426729 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-09-19 11:51:45.426742 | orchestrator | Friday 19 September 2025 11:48:02 +0000 (0:00:05.598) 0:00:58.389 ****** 2025-09-19 11:51:45.426754 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.426767 | orchestrator | 2025-09-19 11:51:45.426780 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-19 11:51:45.426845 | orchestrator | Friday 19 September 2025 11:48:07 +0000 (0:00:04.757) 0:01:03.147 ****** 2025-09-19 11:51:45.426859 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:51:45.426870 | orchestrator | 2025-09-19 11:51:45.426881 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-09-19 11:51:45.426892 | orchestrator | Friday 19 September 2025 11:48:11 +0000 (0:00:03.646) 0:01:06.793 ****** 2025-09-19 11:51:45.426902 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-19 11:51:45.426913 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-19 11:51:45.426924 | orchestrator | 2025-09-19 11:51:45.426935 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-09-19 11:51:45.426945 | orchestrator | Friday 19 September 2025 11:48:22 +0000 (0:00:11.041) 0:01:17.835 ****** 2025-09-19 11:51:45.426956 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-09-19 11:51:45.426967 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-09-19 11:51:45.426980 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 
'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-09-19 11:51:45.426992 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-09-19 11:51:45.427003 | orchestrator | 2025-09-19 11:51:45.427014 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-09-19 11:51:45.427025 | orchestrator | Friday 19 September 2025 11:48:38 +0000 (0:00:16.303) 0:01:34.138 ****** 2025-09-19 11:51:45.427036 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.427047 | orchestrator | 2025-09-19 11:51:45.427058 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-09-19 11:51:45.427068 | orchestrator | Friday 19 September 2025 11:48:43 +0000 (0:00:04.821) 0:01:38.960 ****** 2025-09-19 11:51:45.427079 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.427090 | orchestrator | 2025-09-19 11:51:45.427101 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-09-19 11:51:45.427112 | orchestrator | Friday 19 September 2025 11:48:49 +0000 (0:00:05.995) 0:01:44.955 ****** 2025-09-19 11:51:45.427122 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:45.427133 | orchestrator | 2025-09-19 11:51:45.427144 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-09-19 11:51:45.427155 | orchestrator | Friday 19 September 2025 11:48:49 +0000 (0:00:00.185) 0:01:45.141 ****** 2025-09-19 11:51:45.427165 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.427184 | orchestrator | 2025-09-19 11:51:45.427196 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 11:51:45.427206 | orchestrator | Friday 19 September 2025 11:48:54 +0000 (0:00:04.855) 0:01:49.996 
****** 2025-09-19 11:51:45.427217 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:51:45.427228 | orchestrator | 2025-09-19 11:51:45.427239 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-09-19 11:51:45.427250 | orchestrator | Friday 19 September 2025 11:48:55 +0000 (0:00:00.794) 0:01:50.790 ****** 2025-09-19 11:51:45.427261 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:51:45.427271 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.427282 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:51:45.427293 | orchestrator | 2025-09-19 11:51:45.427303 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-09-19 11:51:45.427314 | orchestrator | Friday 19 September 2025 11:49:01 +0000 (0:00:05.821) 0:01:56.612 ****** 2025-09-19 11:51:45.427325 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:51:45.427336 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:51:45.427347 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.427358 | orchestrator | 2025-09-19 11:51:45.427369 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-09-19 11:51:45.427379 | orchestrator | Friday 19 September 2025 11:49:05 +0000 (0:00:04.739) 0:02:01.352 ****** 2025-09-19 11:51:45.427390 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.427401 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:51:45.427411 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:51:45.427422 | orchestrator | 2025-09-19 11:51:45.427433 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-09-19 11:51:45.427444 | orchestrator | Friday 19 September 2025 11:49:06 +0000 (0:00:00.822) 0:02:02.174 ****** 2025-09-19 11:51:45.427455 | orchestrator | ok: [testbed-node-0] 
2025-09-19 11:51:45.427466 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:51:45.427476 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:51:45.427540 | orchestrator |
2025-09-19 11:51:45.427554 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-09-19 11:51:45.427565 | orchestrator | Friday 19 September 2025 11:49:09 +0000 (0:00:02.512) 0:02:04.687 ******
2025-09-19 11:51:45.427575 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:51:45.427586 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:51:45.427597 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:51:45.427607 | orchestrator |
2025-09-19 11:51:45.427618 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-09-19 11:51:45.427629 | orchestrator | Friday 19 September 2025 11:49:10 +0000 (0:00:01.315) 0:02:06.003 ******
2025-09-19 11:51:45.427640 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:51:45.427650 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:51:45.427661 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:51:45.427671 | orchestrator |
2025-09-19 11:51:45.427683 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-09-19 11:51:45.427693 | orchestrator | Friday 19 September 2025 11:49:11 +0000 (0:00:01.878) 0:02:07.238 ******
2025-09-19 11:51:45.427704 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:51:45.427715 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:51:45.427726 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:51:45.427736 | orchestrator |
2025-09-19 11:51:45.427784 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-09-19 11:51:45.427797 | orchestrator | Friday 19 September 2025 11:49:13 +0000 (0:00:01.878) 0:02:09.117 ******
2025-09-19 11:51:45.427808 | orchestrator | changed: [testbed-node-0]
2025-09-19 11:51:45.427819 | orchestrator | changed: [testbed-node-1]
2025-09-19 11:51:45.427829 | orchestrator | changed: [testbed-node-2]
2025-09-19 11:51:45.427840 | orchestrator |
2025-09-19 11:51:45.427851 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-09-19 11:51:45.427870 | orchestrator | Friday 19 September 2025 11:49:15 +0000 (0:00:01.658) 0:02:10.776 ******
2025-09-19 11:51:45.427881 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:51:45.427892 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:51:45.427902 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:51:45.427913 | orchestrator |
2025-09-19 11:51:45.427924 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-09-19 11:51:45.427935 | orchestrator | Friday 19 September 2025 11:49:15 +0000 (0:00:00.654) 0:02:11.430 ******
2025-09-19 11:51:45.427945 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:51:45.427956 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:51:45.427967 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:51:45.427977 | orchestrator |
2025-09-19 11:51:45.427988 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-19 11:51:45.427999 | orchestrator | Friday 19 September 2025 11:49:18 +0000 (0:00:02.793) 0:02:14.224 ******
2025-09-19 11:51:45.428010 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 11:51:45.428021 | orchestrator |
2025-09-19 11:51:45.428031 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-09-19 11:51:45.428042 | orchestrator | Friday 19 September 2025 11:49:19 +0000 (0:00:00.588) 0:02:14.813 ******
2025-09-19 11:51:45.428053 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:51:45.428064 | orchestrator |
2025-09-19 11:51:45.428075 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-09-19 11:51:45.428086 | orchestrator | Friday 19 September 2025 11:49:22 +0000 (0:00:03.465) 0:02:18.278 ******
2025-09-19 11:51:45.428096 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:51:45.428107 | orchestrator |
2025-09-19 11:51:45.428118 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-09-19 11:51:45.428128 | orchestrator | Friday 19 September 2025 11:49:26 +0000 (0:00:07.222) 0:02:21.749 ******
2025-09-19 11:51:45.428139 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-09-19 11:51:45.428150 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-09-19 11:51:45.428161 | orchestrator |
2025-09-19 11:51:45.428172 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-09-19 11:51:45.428182 | orchestrator | Friday 19 September 2025 11:49:33 +0000 (0:00:07.222) 0:02:28.971 ******
2025-09-19 11:51:45.428198 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:51:45.428209 | orchestrator |
2025-09-19 11:51:45.428220 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-09-19 11:51:45.428230 | orchestrator | Friday 19 September 2025 11:49:36 +0000 (0:00:03.564) 0:02:32.536 ******
2025-09-19 11:51:45.428241 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:51:45.428252 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:51:45.428263 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:51:45.428274 | orchestrator |
2025-09-19 11:51:45.428284 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-09-19 11:51:45.428295 | orchestrator | Friday 19 September 2025 11:49:37 +0000 (0:00:00.347) 0:02:32.883 ******
2025-09-19 11:51:45.428309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group':
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.428364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.428378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.428391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.428403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.428414 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.428426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.428446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.428512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.428527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.428540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.428551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.428563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.428582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.428624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.428638 | orchestrator | 2025-09-19 11:51:45.428649 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-09-19 11:51:45.428660 | orchestrator | Friday 19 September 2025 11:49:39 +0000 (0:00:02.442) 0:02:35.326 ****** 2025-09-19 11:51:45.428671 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:45.428682 | orchestrator | 2025-09-19 11:51:45.428693 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-09-19 11:51:45.428704 | orchestrator | Friday 19 September 2025 11:49:39 +0000 (0:00:00.128) 0:02:35.455 ****** 2025-09-19 11:51:45.428714 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:45.428725 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:45.428735 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:45.428746 | orchestrator | 2025-09-19 11:51:45.428757 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-09-19 11:51:45.428768 | orchestrator | Friday 19 September 2025 11:49:40 +0000 (0:00:00.529) 0:02:35.984 ****** 2025-09-19 11:51:45.428779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:51:45.428791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:51:45.428802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.428820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.428832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:51:45.428843 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:45.428885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:51:45.428899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:51:45.428910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.428921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.428939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:51:45.428950 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:45.428991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:51:45.429005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:51:45.429017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:51:45.429057 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:45.429068 | orchestrator | 2025-09-19 11:51:45.429079 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 11:51:45.429090 | orchestrator | Friday 19 September 2025 11:49:41 +0000 (0:00:00.672) 0:02:36.657 ****** 2025-09-19 11:51:45.429100 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 11:51:45.429111 | orchestrator | 2025-09-19 11:51:45.429122 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-09-19 11:51:45.429132 | orchestrator | Friday 19 September 2025 11:49:41 +0000 (0:00:00.526) 0:02:37.184 ****** 2025-09-19 11:51:45.429158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.429202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.429216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.429234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.429246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.429263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.429274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.429292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.429304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.429315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.429341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.429352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.429368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.429390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.429402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.429413 | orchestrator | 2025-09-19 11:51:45.429423 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-09-19 11:51:45.429434 | orchestrator | Friday 19 September 2025 11:49:47 +0000 (0:00:05.675) 0:02:42.860 ****** 2025-09-19 11:51:45.429445 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:51:45.429463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:51:45.429475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:51:45.429540 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:45.429552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:51:45.429569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:51:45.429581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:51:45.429619 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:45.429639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:51:45.429651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:51:45.429668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:51:45.429702 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:45.429712 | orchestrator | 2025-09-19 11:51:45.429723 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-09-19 11:51:45.429734 | orchestrator | Friday 19 September 2025 11:49:47 +0000 (0:00:00.643) 0:02:43.503 ****** 2025-09-19 11:51:45.429750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2025-09-19 11:51:45.429768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:51:45.429780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:51:45.429819 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:45.429835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:51:45.429847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:51:45.429866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:51:45.429907 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:45.429918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 11:51:45.429929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 11:51:45.429945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 11:51:45.429984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 11:51:45.429995 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:45.430005 | orchestrator | 2025-09-19 11:51:45.430047 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-09-19 
11:51:45.430060 | orchestrator | Friday 19 September 2025 11:49:48 +0000 (0:00:00.859) 0:02:44.362 ****** 2025-09-19 11:51:45.430072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.430083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.430099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.430119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.430137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.430149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.430160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2025-09-19 11:51:45.430275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430286 | orchestrator | 2025-09-19 11:51:45.430297 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-09-19 11:51:45.430308 | orchestrator | Friday 19 September 2025 11:49:54 +0000 (0:00:05.609) 0:02:49.971 ****** 2025-09-19 11:51:45.430318 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-19 11:51:45.430334 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-19 11:51:45.430345 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-09-19 11:51:45.430355 | orchestrator | 2025-09-19 11:51:45.430366 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-09-19 11:51:45.430384 | orchestrator | Friday 19 September 2025 11:49:56 +0000 (0:00:01.622) 0:02:51.594 ****** 2025-09-19 11:51:45.430401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.430413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.430425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.430436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.430453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.430471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.430543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.430670 | orchestrator | 2025-09-19 11:51:45.430681 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-19 11:51:45.430692 | orchestrator | Friday 19 September 2025 11:50:11 +0000 (0:00:15.417) 0:03:07.011 ****** 2025-09-19 11:51:45.430703 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.430714 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:51:45.430725 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:51:45.430735 | orchestrator | 2025-09-19 11:51:45.430746 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-19 11:51:45.430757 | orchestrator | Friday 19 September 2025 11:50:12 +0000 (0:00:01.436) 0:03:08.447 ****** 2025-09-19 11:51:45.430767 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-19 11:51:45.430778 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-19 11:51:45.430789 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-19 11:51:45.430799 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-19 11:51:45.430810 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-19 11:51:45.430820 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-19 11:51:45.430831 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-19 11:51:45.430841 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-19 11:51:45.430852 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-19 11:51:45.430862 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-19 11:51:45.430873 | orchestrator 
| changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-19 11:51:45.430890 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-19 11:51:45.430901 | orchestrator | 2025-09-19 11:51:45.430911 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-19 11:51:45.430922 | orchestrator | Friday 19 September 2025 11:50:18 +0000 (0:00:05.194) 0:03:13.642 ****** 2025-09-19 11:51:45.430933 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-19 11:51:45.430943 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-19 11:51:45.430953 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-19 11:51:45.430964 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-19 11:51:45.430974 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-19 11:51:45.430984 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-19 11:51:45.430993 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-19 11:51:45.431003 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-19 11:51:45.431017 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-19 11:51:45.431026 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-19 11:51:45.431036 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-19 11:51:45.431045 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-19 11:51:45.431054 | orchestrator | 2025-09-19 11:51:45.431064 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-09-19 11:51:45.431074 | orchestrator | Friday 19 September 2025 11:50:23 +0000 (0:00:05.166) 0:03:18.809 ****** 2025-09-19 11:51:45.431083 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2025-09-19 11:51:45.431093 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-19 11:51:45.431102 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-19 11:51:45.431111 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-19 11:51:45.431121 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-19 11:51:45.431130 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-19 11:51:45.431140 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-19 11:51:45.431154 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-19 11:51:45.431164 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-19 11:51:45.431173 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-19 11:51:45.431182 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-19 11:51:45.431192 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-19 11:51:45.431201 | orchestrator | 2025-09-19 11:51:45.431211 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-19 11:51:45.431220 | orchestrator | Friday 19 September 2025 11:50:28 +0000 (0:00:05.219) 0:03:24.028 ****** 2025-09-19 11:51:45.431230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.431246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.431261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 11:51:45.431271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.431286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-19 11:51:45.431296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2025-09-19 11:51:45.431307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.431324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.431335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.431349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.431359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.431374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-19 11:51:45.431385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.431400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.431410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-19 11:51:45.431420 | orchestrator | 2025-09-19 
11:51:45.431430 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 11:51:45.431439 | orchestrator | Friday 19 September 2025 11:50:32 +0000 (0:00:03.714) 0:03:27.743 ****** 2025-09-19 11:51:45.431449 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:51:45.431458 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:51:45.431468 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:51:45.431477 | orchestrator | 2025-09-19 11:51:45.431503 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-19 11:51:45.431514 | orchestrator | Friday 19 September 2025 11:50:32 +0000 (0:00:00.305) 0:03:28.049 ****** 2025-09-19 11:51:45.431523 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.431533 | orchestrator | 2025-09-19 11:51:45.431542 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-19 11:51:45.431552 | orchestrator | Friday 19 September 2025 11:50:34 +0000 (0:00:02.263) 0:03:30.313 ****** 2025-09-19 11:51:45.431561 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.431571 | orchestrator | 2025-09-19 11:51:45.431580 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-19 11:51:45.431589 | orchestrator | Friday 19 September 2025 11:50:37 +0000 (0:00:02.581) 0:03:32.894 ****** 2025-09-19 11:51:45.431599 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.431608 | orchestrator | 2025-09-19 11:51:45.431618 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-19 11:51:45.431634 | orchestrator | Friday 19 September 2025 11:50:39 +0000 (0:00:02.290) 0:03:35.185 ****** 2025-09-19 11:51:45.431644 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.431654 | orchestrator | 2025-09-19 11:51:45.431663 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2025-09-19 11:51:45.431673 | orchestrator | Friday 19 September 2025 11:50:41 +0000 (0:00:02.286) 0:03:37.471 ****** 2025-09-19 11:51:45.431682 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.431692 | orchestrator | 2025-09-19 11:51:45.431701 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-19 11:51:45.431711 | orchestrator | Friday 19 September 2025 11:51:02 +0000 (0:00:20.632) 0:03:58.104 ****** 2025-09-19 11:51:45.431720 | orchestrator | 2025-09-19 11:51:45.431730 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-19 11:51:45.431739 | orchestrator | Friday 19 September 2025 11:51:02 +0000 (0:00:00.067) 0:03:58.172 ****** 2025-09-19 11:51:45.431749 | orchestrator | 2025-09-19 11:51:45.431758 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-19 11:51:45.431768 | orchestrator | Friday 19 September 2025 11:51:02 +0000 (0:00:00.069) 0:03:58.241 ****** 2025-09-19 11:51:45.431777 | orchestrator | 2025-09-19 11:51:45.431793 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-19 11:51:45.431808 | orchestrator | Friday 19 September 2025 11:51:02 +0000 (0:00:00.066) 0:03:58.308 ****** 2025-09-19 11:51:45.431818 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.431827 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:51:45.431837 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:51:45.431846 | orchestrator | 2025-09-19 11:51:45.431856 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-19 11:51:45.431865 | orchestrator | Friday 19 September 2025 11:51:13 +0000 (0:00:11.056) 0:04:09.365 ****** 2025-09-19 11:51:45.431874 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:51:45.431883 | orchestrator | changed: 
[testbed-node-0] 2025-09-19 11:51:45.431893 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:51:45.431902 | orchestrator | 2025-09-19 11:51:45.431912 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-19 11:51:45.431921 | orchestrator | Friday 19 September 2025 11:51:24 +0000 (0:00:10.467) 0:04:19.832 ****** 2025-09-19 11:51:45.431931 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.431940 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:51:45.431949 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:51:45.431959 | orchestrator | 2025-09-19 11:51:45.431968 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-19 11:51:45.431978 | orchestrator | Friday 19 September 2025 11:51:30 +0000 (0:00:05.817) 0:04:25.650 ****** 2025-09-19 11:51:45.431987 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:51:45.431996 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:51:45.432006 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.432015 | orchestrator | 2025-09-19 11:51:45.432025 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-19 11:51:45.432034 | orchestrator | Friday 19 September 2025 11:51:38 +0000 (0:00:08.113) 0:04:33.764 ****** 2025-09-19 11:51:45.432043 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:51:45.432053 | orchestrator | changed: [testbed-node-1] 2025-09-19 11:51:45.432062 | orchestrator | changed: [testbed-node-2] 2025-09-19 11:51:45.432072 | orchestrator | 2025-09-19 11:51:45.432081 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:51:45.432091 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-19 11:51:45.432101 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2025-09-19 11:51:45.432111 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 11:51:45.432121 | orchestrator | 2025-09-19 11:51:45.432130 | orchestrator | 2025-09-19 11:51:45.432139 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:51:45.432149 | orchestrator | Friday 19 September 2025 11:51:44 +0000 (0:00:06.082) 0:04:39.846 ****** 2025-09-19 11:51:45.432158 | orchestrator | =============================================================================== 2025-09-19 11:51:45.432168 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.63s 2025-09-19 11:51:45.432177 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.05s 2025-09-19 11:51:45.432186 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.30s 2025-09-19 11:51:45.432196 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.42s 2025-09-19 11:51:45.432205 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.06s 2025-09-19 11:51:45.432214 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.04s 2025-09-19 11:51:45.432224 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.47s 2025-09-19 11:51:45.432233 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.26s 2025-09-19 11:51:45.432246 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.11s 2025-09-19 11:51:45.432256 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.46s 2025-09-19 11:51:45.432265 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.22s 2025-09-19 11:51:45.432274 
| orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.67s 2025-09-19 11:51:45.432283 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 6.08s 2025-09-19 11:51:45.432297 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.00s 2025-09-19 11:51:45.432306 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.82s 2025-09-19 11:51:45.432316 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.82s 2025-09-19 11:51:45.432325 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.68s 2025-09-19 11:51:45.432335 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.61s 2025-09-19 11:51:45.432344 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.60s 2025-09-19 11:51:45.432354 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.22s 2025-09-19 11:51:45.432363 | orchestrator | 2025-09-19 11:51:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:51:48.465028 | orchestrator | 2025-09-19 11:51:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:51:51.507000 | orchestrator | 2025-09-19 11:51:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:51:54.545949 | orchestrator | 2025-09-19 11:51:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:51:57.582795 | orchestrator | 2025-09-19 11:51:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:00.615262 | orchestrator | 2025-09-19 11:52:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:03.650901 | orchestrator | 2025-09-19 11:52:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:06.699080 | orchestrator | 2025-09-19 
11:52:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:09.743099 | orchestrator | 2025-09-19 11:52:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:12.785797 | orchestrator | 2025-09-19 11:52:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:15.824294 | orchestrator | 2025-09-19 11:52:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:18.866742 | orchestrator | 2025-09-19 11:52:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:21.914864 | orchestrator | 2025-09-19 11:52:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:24.953790 | orchestrator | 2025-09-19 11:52:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:27.986833 | orchestrator | 2025-09-19 11:52:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:31.030825 | orchestrator | 2025-09-19 11:52:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:34.068239 | orchestrator | 2025-09-19 11:52:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:37.110976 | orchestrator | 2025-09-19 11:52:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:40.154792 | orchestrator | 2025-09-19 11:52:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:43.200766 | orchestrator | 2025-09-19 11:52:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-19 11:52:46.244922 | orchestrator | 2025-09-19 11:52:46.591800 | orchestrator | 2025-09-19 11:52:46.595134 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Sep 19 11:52:46 UTC 2025 2025-09-19 11:52:46.595171 | orchestrator | 2025-09-19 11:52:47.100094 | orchestrator | ok: Runtime: 0:33:04.565900 2025-09-19 11:52:47.351712 | 2025-09-19 11:52:47.351853 | TASK [Bootstrap services] 2025-09-19 11:52:48.101393 | orchestrator | 2025-09-19 11:52:48.101610 | 
orchestrator | # BOOTSTRAP 2025-09-19 11:52:48.101640 | orchestrator | 2025-09-19 11:52:48.101653 | orchestrator | + set -e 2025-09-19 11:52:48.101665 | orchestrator | + echo 2025-09-19 11:52:48.101677 | orchestrator | + echo '# BOOTSTRAP' 2025-09-19 11:52:48.101693 | orchestrator | + echo 2025-09-19 11:52:48.101736 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-19 11:52:48.111301 | orchestrator | + set -e 2025-09-19 11:52:48.111342 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-19 11:52:52.732860 | orchestrator | 2025-09-19 11:52:52 | INFO  | It takes a moment until task 9064d044-62e5-498c-8752-fcacf03a6b5a (flavor-manager) has been started and output is visible here. 2025-09-19 11:52:58.579685 | orchestrator | 2025-09-19 11:52:56 | INFO  | Flavor SCS-1V-4 created 2025-09-19 11:52:58.579794 | orchestrator | 2025-09-19 11:52:56 | INFO  | Flavor SCS-2V-8 created 2025-09-19 11:52:58.579812 | orchestrator | 2025-09-19 11:52:56 | INFO  | Flavor SCS-4V-16 created 2025-09-19 11:52:58.579825 | orchestrator | 2025-09-19 11:52:56 | INFO  | Flavor SCS-8V-32 created 2025-09-19 11:52:58.579836 | orchestrator | 2025-09-19 11:52:57 | INFO  | Flavor SCS-1V-2 created 2025-09-19 11:52:58.579848 | orchestrator | 2025-09-19 11:52:57 | INFO  | Flavor SCS-2V-4 created 2025-09-19 11:52:58.579859 | orchestrator | 2025-09-19 11:52:57 | INFO  | Flavor SCS-4V-8 created 2025-09-19 11:52:58.579872 | orchestrator | 2025-09-19 11:52:57 | INFO  | Flavor SCS-8V-16 created 2025-09-19 11:52:58.579896 | orchestrator | 2025-09-19 11:52:57 | INFO  | Flavor SCS-16V-32 created 2025-09-19 11:52:58.579908 | orchestrator | 2025-09-19 11:52:57 | INFO  | Flavor SCS-1V-8 created 2025-09-19 11:52:58.579919 | orchestrator | 2025-09-19 11:52:57 | INFO  | Flavor SCS-2V-16 created 2025-09-19 11:52:58.579930 | orchestrator | 2025-09-19 11:52:57 | INFO  | Flavor SCS-4V-32 created 2025-09-19 11:52:58.579941 | orchestrator | 2025-09-19 11:52:58 
| INFO  | Flavor SCS-1L-1 created 2025-09-19 11:52:58.579952 | orchestrator | 2025-09-19 11:52:58 | INFO  | Flavor SCS-2V-4-20s created 2025-09-19 11:52:58.579964 | orchestrator | 2025-09-19 11:52:58 | INFO  | Flavor SCS-4V-16-100s created 2025-09-19 11:53:00.407525 | orchestrator | 2025-09-19 11:53:00 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-09-19 11:53:10.578340 | orchestrator | 2025-09-19 11:53:10 | INFO  | Task e2cd0bc3-9836-4dd4-bd99-2013aef6ad1d (bootstrap-basic) was prepared for execution. 2025-09-19 11:53:10.578487 | orchestrator | 2025-09-19 11:53:10 | INFO  | It takes a moment until task e2cd0bc3-9836-4dd4-bd99-2013aef6ad1d (bootstrap-basic) has been started and output is visible here. 2025-09-19 11:54:10.737514 | orchestrator | 2025-09-19 11:54:10.737614 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-09-19 11:54:10.737627 | orchestrator | 2025-09-19 11:54:10.737636 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 11:54:10.737648 | orchestrator | Friday 19 September 2025 11:53:14 +0000 (0:00:00.086) 0:00:00.086 ****** 2025-09-19 11:54:10.737659 | orchestrator | ok: [localhost] 2025-09-19 11:54:10.737668 | orchestrator | 2025-09-19 11:54:10.737676 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-09-19 11:54:10.737685 | orchestrator | Friday 19 September 2025 11:53:16 +0000 (0:00:01.879) 0:00:01.965 ****** 2025-09-19 11:54:10.737692 | orchestrator | ok: [localhost] 2025-09-19 11:54:10.737701 | orchestrator | 2025-09-19 11:54:10.737708 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-09-19 11:54:10.737717 | orchestrator | Friday 19 September 2025 11:53:25 +0000 (0:00:08.614) 0:00:10.580 ****** 2025-09-19 11:54:10.737725 | orchestrator | changed: [localhost] 2025-09-19 11:54:10.737733 | orchestrator | 
2025-09-19 11:54:10.737741 | orchestrator | TASK [Get volume type local] *************************************************** 2025-09-19 11:54:10.737773 | orchestrator | Friday 19 September 2025 11:53:33 +0000 (0:00:07.769) 0:00:18.350 ****** 2025-09-19 11:54:10.737781 | orchestrator | ok: [localhost] 2025-09-19 11:54:10.737789 | orchestrator | 2025-09-19 11:54:10.737797 | orchestrator | TASK [Create volume type local] ************************************************ 2025-09-19 11:54:10.737806 | orchestrator | Friday 19 September 2025 11:53:40 +0000 (0:00:07.490) 0:00:25.840 ****** 2025-09-19 11:54:10.737814 | orchestrator | changed: [localhost] 2025-09-19 11:54:10.737822 | orchestrator | 2025-09-19 11:54:10.737829 | orchestrator | TASK [Create public network] *************************************************** 2025-09-19 11:54:10.737837 | orchestrator | Friday 19 September 2025 11:53:47 +0000 (0:00:06.688) 0:00:32.528 ****** 2025-09-19 11:54:10.737845 | orchestrator | changed: [localhost] 2025-09-19 11:54:10.737853 | orchestrator | 2025-09-19 11:54:10.737861 | orchestrator | TASK [Set public network to default] ******************************************* 2025-09-19 11:54:10.737868 | orchestrator | Friday 19 September 2025 11:53:52 +0000 (0:00:05.244) 0:00:37.772 ****** 2025-09-19 11:54:10.737876 | orchestrator | changed: [localhost] 2025-09-19 11:54:10.737885 | orchestrator | 2025-09-19 11:54:10.737893 | orchestrator | TASK [Create public subnet] **************************************************** 2025-09-19 11:54:10.737901 | orchestrator | Friday 19 September 2025 11:53:58 +0000 (0:00:06.247) 0:00:44.020 ****** 2025-09-19 11:54:10.737909 | orchestrator | changed: [localhost] 2025-09-19 11:54:10.737916 | orchestrator | 2025-09-19 11:54:10.737925 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-09-19 11:54:10.737933 | orchestrator | Friday 19 September 2025 11:54:03 +0000 (0:00:04.382) 0:00:48.403 ****** 
2025-09-19 11:54:10.737940 | orchestrator | changed: [localhost] 2025-09-19 11:54:10.737948 | orchestrator | 2025-09-19 11:54:10.737956 | orchestrator | TASK [Create manager role] ***************************************************** 2025-09-19 11:54:10.737964 | orchestrator | Friday 19 September 2025 11:54:06 +0000 (0:00:03.801) 0:00:52.204 ****** 2025-09-19 11:54:10.737972 | orchestrator | ok: [localhost] 2025-09-19 11:54:10.737979 | orchestrator | 2025-09-19 11:54:10.737987 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:54:10.737996 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 11:54:10.738007 | orchestrator | 2025-09-19 11:54:10.738096 | orchestrator | 2025-09-19 11:54:10.738114 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:54:10.738128 | orchestrator | Friday 19 September 2025 11:54:10 +0000 (0:00:03.536) 0:00:55.741 ****** 2025-09-19 11:54:10.738140 | orchestrator | =============================================================================== 2025-09-19 11:54:10.738154 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.61s 2025-09-19 11:54:10.738168 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.77s 2025-09-19 11:54:10.738192 | orchestrator | Get volume type local --------------------------------------------------- 7.49s 2025-09-19 11:54:10.738204 | orchestrator | Create volume type local ------------------------------------------------ 6.69s 2025-09-19 11:54:10.738214 | orchestrator | Set public network to default ------------------------------------------- 6.25s 2025-09-19 11:54:10.738222 | orchestrator | Create public network --------------------------------------------------- 5.24s 2025-09-19 11:54:10.738241 | orchestrator | Create public subnet 
---------------------------------------------------- 4.38s 2025-09-19 11:54:10.738250 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.80s 2025-09-19 11:54:10.738258 | orchestrator | Create manager role ----------------------------------------------------- 3.54s 2025-09-19 11:54:10.738267 | orchestrator | Gathering Facts --------------------------------------------------------- 1.88s 2025-09-19 11:54:12.998211 | orchestrator | 2025-09-19 11:54:13 | INFO  | It takes a moment until task 9fac314a-beb0-45cb-aca2-251daa34a977 (image-manager) has been started and output is visible here. 2025-09-19 11:54:54.888619 | orchestrator | 2025-09-19 11:54:16 | INFO  | Processing image 'Cirros 0.6.2' 2025-09-19 11:54:54.888748 | orchestrator | 2025-09-19 11:54:17 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-09-19 11:54:54.888765 | orchestrator | 2025-09-19 11:54:17 | INFO  | Importing image Cirros 0.6.2 2025-09-19 11:54:54.888777 | orchestrator | 2025-09-19 11:54:17 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-19 11:54:54.888789 | orchestrator | 2025-09-19 11:54:19 | INFO  | Waiting for image to leave queued state... 2025-09-19 11:54:54.888801 | orchestrator | 2025-09-19 11:54:21 | INFO  | Waiting for import to complete... 
2025-09-19 11:54:54.888812 | orchestrator | 2025-09-19 11:54:31 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-09-19 11:54:54.888823 | orchestrator | 2025-09-19 11:54:31 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-09-19 11:54:54.888834 | orchestrator | 2025-09-19 11:54:31 | INFO  | Setting internal_version = 0.6.2 2025-09-19 11:54:54.888846 | orchestrator | 2025-09-19 11:54:31 | INFO  | Setting image_original_user = cirros 2025-09-19 11:54:54.888857 | orchestrator | 2025-09-19 11:54:31 | INFO  | Adding tag os:cirros 2025-09-19 11:54:54.888868 | orchestrator | 2025-09-19 11:54:31 | INFO  | Setting property architecture: x86_64 2025-09-19 11:54:54.888939 | orchestrator | 2025-09-19 11:54:32 | INFO  | Setting property hw_disk_bus: scsi 2025-09-19 11:54:54.888952 | orchestrator | 2025-09-19 11:54:32 | INFO  | Setting property hw_rng_model: virtio 2025-09-19 11:54:54.888963 | orchestrator | 2025-09-19 11:54:32 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-19 11:54:54.888974 | orchestrator | 2025-09-19 11:54:32 | INFO  | Setting property hw_watchdog_action: reset 2025-09-19 11:54:54.888985 | orchestrator | 2025-09-19 11:54:33 | INFO  | Setting property hypervisor_type: qemu 2025-09-19 11:54:54.888996 | orchestrator | 2025-09-19 11:54:33 | INFO  | Setting property os_distro: cirros 2025-09-19 11:54:54.889006 | orchestrator | 2025-09-19 11:54:33 | INFO  | Setting property replace_frequency: never 2025-09-19 11:54:54.889017 | orchestrator | 2025-09-19 11:54:33 | INFO  | Setting property uuid_validity: none 2025-09-19 11:54:54.889028 | orchestrator | 2025-09-19 11:54:33 | INFO  | Setting property provided_until: none 2025-09-19 11:54:54.889039 | orchestrator | 2025-09-19 11:54:34 | INFO  | Setting property image_description: Cirros 2025-09-19 11:54:54.889050 | orchestrator | 2025-09-19 11:54:34 | INFO  | Setting property image_name: Cirros 2025-09-19 11:54:54.889060 | orchestrator | 2025-09-19 11:54:34 | INFO  | 
Setting property internal_version: 0.6.2 2025-09-19 11:54:54.889071 | orchestrator | 2025-09-19 11:54:35 | INFO  | Setting property image_original_user: cirros 2025-09-19 11:54:54.889082 | orchestrator | 2025-09-19 11:54:35 | INFO  | Setting property os_version: 0.6.2 2025-09-19 11:54:54.889102 | orchestrator | 2025-09-19 11:54:35 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-19 11:54:54.889115 | orchestrator | 2025-09-19 11:54:35 | INFO  | Setting property image_build_date: 2023-05-30 2025-09-19 11:54:54.889125 | orchestrator | 2025-09-19 11:54:36 | INFO  | Checking status of 'Cirros 0.6.2' 2025-09-19 11:54:54.889136 | orchestrator | 2025-09-19 11:54:36 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-09-19 11:54:54.889147 | orchestrator | 2025-09-19 11:54:36 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-09-19 11:54:54.889159 | orchestrator | 2025-09-19 11:54:36 | INFO  | Processing image 'Cirros 0.6.3' 2025-09-19 11:54:54.889180 | orchestrator | 2025-09-19 11:54:36 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-09-19 11:54:54.889193 | orchestrator | 2025-09-19 11:54:36 | INFO  | Importing image Cirros 0.6.3 2025-09-19 11:54:54.889206 | orchestrator | 2025-09-19 11:54:36 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-09-19 11:54:54.889218 | orchestrator | 2025-09-19 11:54:37 | INFO  | Waiting for image to leave queued state... 2025-09-19 11:54:54.889230 | orchestrator | 2025-09-19 11:54:39 | INFO  | Waiting for import to complete... 
2025-09-19 11:54:54.889246 | orchestrator | 2025-09-19 11:54:49 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-09-19 11:54:54.889275 | orchestrator | 2025-09-19 11:54:50 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-09-19 11:54:54.889287 | orchestrator | 2025-09-19 11:54:50 | INFO  | Setting internal_version = 0.6.3 2025-09-19 11:54:54.889299 | orchestrator | 2025-09-19 11:54:50 | INFO  | Setting image_original_user = cirros 2025-09-19 11:54:54.889311 | orchestrator | 2025-09-19 11:54:50 | INFO  | Adding tag os:cirros 2025-09-19 11:54:54.889324 | orchestrator | 2025-09-19 11:54:50 | INFO  | Setting property architecture: x86_64 2025-09-19 11:54:54.889336 | orchestrator | 2025-09-19 11:54:50 | INFO  | Setting property hw_disk_bus: scsi 2025-09-19 11:54:54.889349 | orchestrator | 2025-09-19 11:54:50 | INFO  | Setting property hw_rng_model: virtio 2025-09-19 11:54:54.889360 | orchestrator | 2025-09-19 11:54:51 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-19 11:54:54.889371 | orchestrator | 2025-09-19 11:54:51 | INFO  | Setting property hw_watchdog_action: reset 2025-09-19 11:54:54.889382 | orchestrator | 2025-09-19 11:54:51 | INFO  | Setting property hypervisor_type: qemu 2025-09-19 11:54:54.889393 | orchestrator | 2025-09-19 11:54:51 | INFO  | Setting property os_distro: cirros 2025-09-19 11:54:54.889404 | orchestrator | 2025-09-19 11:54:51 | INFO  | Setting property replace_frequency: never 2025-09-19 11:54:54.889415 | orchestrator | 2025-09-19 11:54:52 | INFO  | Setting property uuid_validity: none 2025-09-19 11:54:54.889425 | orchestrator | 2025-09-19 11:54:52 | INFO  | Setting property provided_until: none 2025-09-19 11:54:54.889436 | orchestrator | 2025-09-19 11:54:52 | INFO  | Setting property image_description: Cirros 2025-09-19 11:54:54.889447 | orchestrator | 2025-09-19 11:54:52 | INFO  | Setting property image_name: Cirros 2025-09-19 11:54:54.889457 | orchestrator | 2025-09-19 11:54:52 | INFO  | 
Setting property internal_version: 0.6.3 2025-09-19 11:54:54.889468 | orchestrator | 2025-09-19 11:54:53 | INFO  | Setting property image_original_user: cirros 2025-09-19 11:54:54.889479 | orchestrator | 2025-09-19 11:54:53 | INFO  | Setting property os_version: 0.6.3 2025-09-19 11:54:54.889489 | orchestrator | 2025-09-19 11:54:53 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-09-19 11:54:54.889500 | orchestrator | 2025-09-19 11:54:53 | INFO  | Setting property image_build_date: 2024-09-26 2025-09-19 11:54:54.889511 | orchestrator | 2025-09-19 11:54:54 | INFO  | Checking status of 'Cirros 0.6.3' 2025-09-19 11:54:54.889522 | orchestrator | 2025-09-19 11:54:54 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-09-19 11:54:54.889533 | orchestrator | 2025-09-19 11:54:54 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-09-19 11:54:55.168410 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-09-19 11:54:57.157738 | orchestrator | 2025-09-19 11:54:57 | INFO  | date: 2025-09-19 2025-09-19 11:54:57.157965 | orchestrator | 2025-09-19 11:54:57 | INFO  | image: octavia-amphora-haproxy-2024.2.20250919.qcow2 2025-09-19 11:54:57.158172 | orchestrator | 2025-09-19 11:54:57 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2 2025-09-19 11:54:57.158211 | orchestrator | 2025-09-19 11:54:57 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2.CHECKSUM 2025-09-19 11:54:57.185780 | orchestrator | 2025-09-19 11:54:57 | INFO  | checksum: cb1f8a9bf0aeb0e92074b04499e688b0043001241167a8bf8df49931cc66885f 2025-09-19 11:54:57.265808 | orchestrator | 2025-09-19 11:54:57 | 
INFO  | It takes a moment until task f676da7d-df86-4747-8442-f48a006fac0e (image-manager) has been started and output is visible here. 2025-09-19 11:55:57.177446 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-09-19 11:55:57.177618 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-09-19 11:55:57.177638 | orchestrator | 2025-09-19 11:54:59 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-09-19' 2025-09-19 11:55:57.177657 | orchestrator | 2025-09-19 11:54:59 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2: 200 2025-09-19 11:55:57.177670 | orchestrator | 2025-09-19 11:54:59 | INFO  | Importing image OpenStack Octavia Amphora 2025-09-19 2025-09-19 11:55:57.177682 | orchestrator | 2025-09-19 11:54:59 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2 2025-09-19 11:55:57.177695 | orchestrator | 2025-09-19 11:54:59 | INFO  | Waiting for image to leave queued state... 2025-09-19 11:55:57.177706 | orchestrator | 2025-09-19 11:55:01 | INFO  | Waiting for import to complete... 2025-09-19 11:55:57.177717 | orchestrator | 2025-09-19 11:55:11 | INFO  | Waiting for import to complete... 2025-09-19 11:55:57.177728 | orchestrator | 2025-09-19 11:55:21 | INFO  | Waiting for import to complete... 2025-09-19 11:55:57.177739 | orchestrator | 2025-09-19 11:55:31 | INFO  | Waiting for import to complete... 
2025-09-19 11:55:57.177750 | orchestrator | 2025-09-19 11:55:42 | INFO  | Waiting for import to complete... 2025-09-19 11:55:57.177761 | orchestrator | 2025-09-19 11:55:52 | INFO  | Import of 'OpenStack Octavia Amphora 2025-09-19' successfully completed, reloading images 2025-09-19 11:55:57.177773 | orchestrator | 2025-09-19 11:55:52 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-09-19' 2025-09-19 11:55:57.177785 | orchestrator | 2025-09-19 11:55:52 | INFO  | Setting internal_version = 2025-09-19 2025-09-19 11:55:57.177796 | orchestrator | 2025-09-19 11:55:52 | INFO  | Setting image_original_user = ubuntu 2025-09-19 11:55:57.177807 | orchestrator | 2025-09-19 11:55:52 | INFO  | Adding tag amphora 2025-09-19 11:55:57.177818 | orchestrator | 2025-09-19 11:55:52 | INFO  | Adding tag os:ubuntu 2025-09-19 11:55:57.177829 | orchestrator | 2025-09-19 11:55:53 | INFO  | Setting property architecture: x86_64 2025-09-19 11:55:57.177863 | orchestrator | 2025-09-19 11:55:53 | INFO  | Setting property hw_disk_bus: scsi 2025-09-19 11:55:57.177885 | orchestrator | 2025-09-19 11:55:53 | INFO  | Setting property hw_rng_model: virtio 2025-09-19 11:55:57.177897 | orchestrator | 2025-09-19 11:55:53 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-19 11:55:57.177908 | orchestrator | 2025-09-19 11:55:53 | INFO  | Setting property hw_watchdog_action: reset 2025-09-19 11:55:57.177919 | orchestrator | 2025-09-19 11:55:54 | INFO  | Setting property hypervisor_type: qemu 2025-09-19 11:55:57.177930 | orchestrator | 2025-09-19 11:55:54 | INFO  | Setting property os_distro: ubuntu 2025-09-19 11:55:57.177940 | orchestrator | 2025-09-19 11:55:54 | INFO  | Setting property replace_frequency: quarterly 2025-09-19 11:55:57.177951 | orchestrator | 2025-09-19 11:55:54 | INFO  | Setting property uuid_validity: last-1 2025-09-19 11:55:57.177962 | orchestrator | 2025-09-19 11:55:55 | INFO  | Setting property provided_until: none 2025-09-19 11:55:57.177974 | orchestrator | 
2025-09-19 11:55:55 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-09-19 11:55:57.177987 | orchestrator | 2025-09-19 11:55:55 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-09-19 11:55:57.177999 | orchestrator | 2025-09-19 11:55:55 | INFO  | Setting property internal_version: 2025-09-19 2025-09-19 11:55:57.178011 | orchestrator | 2025-09-19 11:55:55 | INFO  | Setting property image_original_user: ubuntu 2025-09-19 11:55:57.178089 | orchestrator | 2025-09-19 11:55:56 | INFO  | Setting property os_version: 2025-09-19 2025-09-19 11:55:57.178103 | orchestrator | 2025-09-19 11:55:56 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2 2025-09-19 11:55:57.178134 | orchestrator | 2025-09-19 11:55:56 | INFO  | Setting property image_build_date: 2025-09-19 2025-09-19 11:55:57.178147 | orchestrator | 2025-09-19 11:55:56 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-09-19' 2025-09-19 11:55:57.178160 | orchestrator | 2025-09-19 11:55:56 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-09-19' 2025-09-19 11:55:57.178172 | orchestrator | 2025-09-19 11:55:56 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-09-19 11:55:57.178185 | orchestrator | 2025-09-19 11:55:56 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-09-19 11:55:57.178199 | orchestrator | 2025-09-19 11:55:56 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-09-19 11:55:57.178211 | orchestrator | 2025-09-19 11:55:56 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-09-19 11:55:57.528649 | orchestrator | ok: Runtime: 0:03:09.749428 2025-09-19 11:55:57.581772 | 2025-09-19 11:55:57.581900 | TASK [Run checks] 2025-09-19 11:55:58.239639 | orchestrator | + set -e 2025-09-19 
11:55:58.239829 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 11:55:58.239853 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 11:55:58.239874 | orchestrator | ++ INTERACTIVE=false 2025-09-19 11:55:58.239888 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 11:55:58.239900 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 11:55:58.239914 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-19 11:55:58.240925 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-19 11:55:58.247571 | orchestrator | 2025-09-19 11:55:58.247623 | orchestrator | # CHECK 2025-09-19 11:55:58.247636 | orchestrator | 2025-09-19 11:55:58.247648 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-19 11:55:58.247664 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-19 11:55:58.247675 | orchestrator | + echo 2025-09-19 11:55:58.247686 | orchestrator | + echo '# CHECK' 2025-09-19 11:55:58.247697 | orchestrator | + echo 2025-09-19 11:55:58.247712 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-19 11:55:58.248228 | orchestrator | ++ semver 9.2.0 5.0.0 2025-09-19 11:55:58.308829 | orchestrator | 2025-09-19 11:55:58.308909 | orchestrator | ## Containers @ testbed-manager 2025-09-19 11:55:58.308930 | orchestrator | 2025-09-19 11:55:58.308951 | orchestrator | + [[ 1 -eq -1 ]] 2025-09-19 11:55:58.308970 | orchestrator | + echo 2025-09-19 11:55:58.308989 | orchestrator | + echo '## Containers @ testbed-manager' 2025-09-19 11:55:58.309008 | orchestrator | + echo 2025-09-19 11:55:58.309026 | orchestrator | + osism container testbed-manager ps 2025-09-19 11:56:00.550837 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-19 11:56:00.550974 | orchestrator | a9516b8dc46f registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711 "dumb-init --single-…" 13 
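The shell trace above shows `manager-version.sh` deriving `MANAGER_VERSION` from `configuration.yml` with a two-character `awk` field separator. A minimal reproduction of that lookup, using a here-doc stand-in for the real file (the path `/tmp/configuration.yml` and the sample value are assumptions for illustration):

```shell
#!/bin/sh
# Stand-in for /opt/configuration/environments/manager/configuration.yml
cat > /tmp/configuration.yml <<'EOF'
manager_version: 9.2.0
EOF

# Same awk invocation as the trace: split on ": " and print the value
# of the line starting with "manager_version:".
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' /tmp/configuration.yml)
export MANAGER_VERSION
echo "$MANAGER_VERSION"   # prints 9.2.0
```

Note the separator is `': '` (colon plus space), which is why the trace shows the somewhat unusual quoting `'-F: '`.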
minutes ago Up 13 minutes prometheus_blackbox_exporter 2025-09-19 11:56:00.550998 | orchestrator | d98fdd8bb5fa registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711 "dumb-init --single-…" 14 minutes ago Up 13 minutes prometheus_alertmanager 2025-09-19 11:56:00.551016 | orchestrator | 795b146bb48d registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-09-19 11:56:00.551038 | orchestrator | 0e8e1073103d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-09-19 11:56:00.551099 | orchestrator | bda53040d7d7 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server 2025-09-19 11:56:00.551112 | orchestrator | f75520576d60 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 17 minutes ago Up 16 minutes cephclient 2025-09-19 11:56:00.551127 | orchestrator | 36f8bb909a8f registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-09-19 11:56:00.551139 | orchestrator | d5d7f38d59a7 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 28 minutes kolla_toolbox 2025-09-19 11:56:00.551149 | orchestrator | 8003dbc5cea6 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-09-19 11:56:00.551182 | orchestrator | 29bf5c7496b2 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 30 minutes ago Up 29 minutes (healthy) 80/tcp phpmyadmin 2025-09-19 11:56:00.551193 | orchestrator | abb307a00aa6 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 30 minutes openstackclient 2025-09-19 11:56:00.551203 | orchestrator | f073f058bf1a registry.osism.tech/osism/homer:v25.05.2 
"/bin/sh /entrypoint…" 30 minutes ago Up 30 minutes (healthy) 8080/tcp homer 2025-09-19 11:56:00.551215 | orchestrator | 81ad8c84c5f2 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 53 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-09-19 11:56:00.551230 | orchestrator | 68ccd7e20df9 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" 57 minutes ago Up 36 minutes (healthy) manager-inventory_reconciler-1 2025-09-19 11:56:00.551260 | orchestrator | 245234fef35f registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" 57 minutes ago Up 37 minutes (healthy) ceph-ansible 2025-09-19 11:56:00.551271 | orchestrator | 891117bffd97 registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" 57 minutes ago Up 37 minutes (healthy) kolla-ansible 2025-09-19 11:56:00.551281 | orchestrator | 5530803a867d registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" 57 minutes ago Up 37 minutes (healthy) osism-ansible 2025-09-19 11:56:00.551291 | orchestrator | 720fce8abb5d registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" 57 minutes ago Up 37 minutes (healthy) osism-kubernetes 2025-09-19 11:56:00.551301 | orchestrator | ba07de0594ed registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 57 minutes ago Up 37 minutes (healthy) 8000/tcp manager-ara-server-1 2025-09-19 11:56:00.551312 | orchestrator | aedf3c48b38f registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" 57 minutes ago Up 37 minutes (healthy) osismclient 2025-09-19 11:56:00.551322 | orchestrator | 6bab5617a7a1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-09-19 11:56:00.551332 | orchestrator | 3be937f803d6 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) 
manager-listener-1 2025-09-19 11:56:00.551346 | orchestrator | ea40caee9b38 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-openstack-1 2025-09-19 11:56:00.551977 | orchestrator | 866529d51d84 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-flower-1 2025-09-19 11:56:00.552000 | orchestrator | e8889aa624ac registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 57 minutes ago Up 37 minutes (healthy) manager-beat-1 2025-09-19 11:56:00.552010 | orchestrator | 58e609372d17 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 57 minutes ago Up 37 minutes 192.168.16.5:3000->3000/tcp osism-frontend 2025-09-19 11:56:00.552021 | orchestrator | 1302201c877c registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" 57 minutes ago Up 37 minutes (healthy) 3306/tcp manager-mariadb-1 2025-09-19 11:56:00.552030 | orchestrator | 96927908a5b6 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 57 minutes ago Up 37 minutes (healthy) 6379/tcp manager-redis-1 2025-09-19 11:56:00.552040 | orchestrator | cd792a4d3073 registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" 59 minutes ago Up 59 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-09-19 11:56:00.827128 | orchestrator | 2025-09-19 11:56:00.827256 | orchestrator | ## Images @ testbed-manager 2025-09-19 11:56:00.827286 | orchestrator | 2025-09-19 11:56:00.827307 | orchestrator | + echo 2025-09-19 11:56:00.827327 | orchestrator | + echo '## Images @ testbed-manager' 2025-09-19 11:56:00.827349 | orchestrator | + echo 2025-09-19 11:56:00.827371 | orchestrator | + osism container testbed-manager images 2025-09-19 11:56:02.944051 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-19 11:56:02.944147 | orchestrator | 
registry.osism.tech/osism/osism-frontend latest 0e15c54d8d9c 3 hours ago 236MB 2025-09-19 11:56:02.944164 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 84cc807d7f93 9 hours ago 243MB 2025-09-19 11:56:02.944196 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d3334946e20e 6 weeks ago 11.5MB 2025-09-19 11:56:02.944208 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250711.0 fcbac8373342 2 months ago 571MB 2025-09-19 11:56:02.944220 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB 2025-09-19 11:56:02.944231 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB 2025-09-19 11:56:02.944242 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB 2025-09-19 11:56:02.944252 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250711 cb02c47a5187 2 months ago 891MB 2025-09-19 11:56:02.944263 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250711 0ac8facfe451 2 months ago 360MB 2025-09-19 11:56:02.944274 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250711 6c4eef6335f5 2 months ago 456MB 2025-09-19 11:56:02.944304 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB 2025-09-19 11:56:02.944335 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB 2025-09-19 11:56:02.944347 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250711.0 7b0f9e78b4e4 2 months ago 575MB 2025-09-19 11:56:02.944368 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250711.0 f677f8f8094b 2 months ago 535MB 2025-09-19 11:56:02.944379 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250711.0 8fcfa643b744 2 months ago 308MB 
2025-09-19 11:56:02.944390 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250711.0 267f92fc46f6 2 months ago 1.21GB 2025-09-19 11:56:02.944401 | orchestrator | registry.osism.tech/osism/osism 0.20250709.0 ccd699d89870 2 months ago 310MB 2025-09-19 11:56:02.944412 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 2 months ago 41.4MB 2025-09-19 11:56:02.944422 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 2 months ago 226MB 2025-09-19 11:56:02.944433 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 dae0c92b7b63 3 months ago 329MB 2025-09-19 11:56:02.944444 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 4 months ago 453MB 2025-09-19 11:56:02.944454 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 7 months ago 571MB 2025-09-19 11:56:02.944465 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 12 months ago 300MB 2025-09-19 11:56:02.944476 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 15 months ago 146MB 2025-09-19 11:56:03.116871 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-19 11:56:03.117150 | orchestrator | ++ semver 9.2.0 5.0.0 2025-09-19 11:56:03.153518 | orchestrator | 2025-09-19 11:56:03.153604 | orchestrator | ## Containers @ testbed-node-0 2025-09-19 11:56:03.153618 | orchestrator | 2025-09-19 11:56:03.153627 | orchestrator | + [[ 1 -eq -1 ]] 2025-09-19 11:56:03.153637 | orchestrator | + echo 2025-09-19 11:56:03.153646 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-09-19 11:56:03.153656 | orchestrator | + echo 2025-09-19 11:56:03.153665 | orchestrator | + osism container testbed-node-0 ps 2025-09-19 11:56:05.203850 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-19 11:56:05.203951 | orchestrator | 0c7ed6a81d39 
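The check loop visible in the trace iterates over the four testbed hosts and, before each listing, evaluates `semver 9.2.0 5.0.0` (result `1`, so `[[ 1 -eq -1 ]]` is false and the listing proceeds). A hedged sketch of that control flow; the `semver` helper below is a stand-in built on `sort -V`, not the real implementation, and the skip-on-`-1` behaviour is an assumption inferred from the guard:

```shell
#!/bin/sh
# Stand-in semver comparator (assumption): prints -1, 0, or 1 for $1 vs $2.
semver() {
  if [ "$1" = "$2" ]; then echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then echo -1
  else echo 1
  fi
}

for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
  # Guard seen in the trace: only older managers (< 5.0.0) would take the branch.
  if [ "$(semver 9.2.0 5.0.0)" -eq -1 ]; then
    continue
  fi
  echo ""
  echo "## Containers @ ${node}"
  echo ""
  # osism container "${node}" ps      # real command from the trace (needs the osism CLI)
  # osism container "${node}" images  # likewise
done
```

With `MANAGER_VERSION=9.2.0` the comparison always returns `1`, so every node's container and image tables are emitted, which matches the output that follows.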
registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-09-19 11:56:05.203979 | orchestrator | 1912411923eb registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-09-19 11:56:05.203993 | orchestrator | 978085b5d817 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-09-19 11:56:05.204004 | orchestrator | 388bdd7d0645 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-09-19 11:56:05.204015 | orchestrator | 1579314dbae7 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 4 minutes (healthy) octavia_api 2025-09-19 11:56:05.204046 | orchestrator | 377cc69faf96 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-09-19 11:56:05.204058 | orchestrator | 3b8d69fab9ca registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-09-19 11:56:05.204094 | orchestrator | fe36211d7462 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-09-19 11:56:05.204113 | orchestrator | 5223a591a381 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-09-19 11:56:05.204132 | orchestrator | ae0713a5e6f0 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-09-19 11:56:05.204152 | orchestrator | 0856744d338f 
registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-09-19 11:56:05.204170 | orchestrator | 9ed2a6d01c82 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-09-19 11:56:05.204183 | orchestrator | 3fbe2df6198d registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-09-19 11:56:05.204194 | orchestrator | e672bb7377b7 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-09-19 11:56:05.204210 | orchestrator | d35b48d11cea registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2025-09-19 11:56:05.204230 | orchestrator | d24965781391 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-09-19 11:56:05.204249 | orchestrator | 3b934ce6967d registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-09-19 11:56:05.204260 | orchestrator | 9a7154fc0228 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-09-19 11:56:05.204271 | orchestrator | 82209ded98c1 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-09-19 11:56:05.204301 | orchestrator | 8df6a462949e registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-09-19 11:56:05.204313 | orchestrator | 27c96b2e70ea 
registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-09-19 11:56:05.204324 | orchestrator | ce6d3fdbdf41 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-09-19 11:56:05.204334 | orchestrator | c1d32272190a registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-09-19 11:56:05.204345 | orchestrator | 6d55882a16d9 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-09-19 11:56:05.204362 | orchestrator | b5a5ba6f316f registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-09-19 11:56:05.204388 | orchestrator | fba578771c4e registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-09-19 11:56:05.204408 | orchestrator | cb7225db6f50 registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-09-19 11:56:05.204427 | orchestrator | c661a7628479 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-09-19 11:56:05.204451 | orchestrator | 96cf9a0da42f registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-09-19 11:56:05.204470 | orchestrator | b62abc8d38d0 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes 
prometheus_mysqld_exporter 2025-09-19 11:56:05.204481 | orchestrator | 87429b37d58e registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-09-19 11:56:05.204492 | orchestrator | 5c2a03f2a56e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2025-09-19 11:56:05.204503 | orchestrator | e95e16bf2c33 registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-09-19 11:56:05.204514 | orchestrator | 8cdb0154bc35 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-09-19 11:56:05.204525 | orchestrator | c10bc2963f77 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh 2025-09-19 11:56:05.204541 | orchestrator | dd733bc0352f registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon 2025-09-19 11:56:05.204552 | orchestrator | 4e529cd06d95 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-09-19 11:56:05.204563 | orchestrator | 3bfde2c3ca03 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-09-19 11:56:05.204574 | orchestrator | 8d5cbbd80534 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-09-19 11:56:05.204584 | orchestrator | 75e92c13dbf4 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-09-19 11:56:05.204643 | 
orchestrator | f16ccf4674a0 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0 2025-09-19 11:56:05.204656 | orchestrator | c44fcee61d26 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-09-19 11:56:05.204667 | orchestrator | f3648092ef02 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-09-19 11:56:05.204687 | orchestrator | 21284df69c01 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-09-19 11:56:05.204698 | orchestrator | aff72ff04073 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-09-19 11:56:05.204709 | orchestrator | c31e977ded80 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2025-09-19 11:56:05.204720 | orchestrator | 30f44bd1d1d3 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-09-19 11:56:05.204730 | orchestrator | d78e8231ce7e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0 2025-09-19 11:56:05.204741 | orchestrator | 910d4f36c107 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-09-19 11:56:05.204752 | orchestrator | 51b3f210b4da registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-09-19 11:56:05.204763 | orchestrator | 95b2445623f7 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 28 minutes 
ago Up 28 minutes (healthy) openvswitch_db 2025-09-19 11:56:05.204774 | orchestrator | d135cd5fb710 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2025-09-19 11:56:05.204785 | orchestrator | eb89500a224b registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2025-09-19 11:56:05.204796 | orchestrator | fb53357bb472 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2025-09-19 11:56:05.204807 | orchestrator | 1aa23fe0844a registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-09-19 11:56:05.204817 | orchestrator | 4886c56bf69d registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-09-19 11:56:05.204828 | orchestrator | af9e30b59fcb registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-09-19 11:56:05.383905 | orchestrator | 2025-09-19 11:56:05.383988 | orchestrator | ## Images @ testbed-node-0 2025-09-19 11:56:05.384003 | orchestrator | 2025-09-19 11:56:05.384013 | orchestrator | + echo 2025-09-19 11:56:05.384024 | orchestrator | + echo '## Images @ testbed-node-0' 2025-09-19 11:56:05.384035 | orchestrator | + echo 2025-09-19 11:56:05.384045 | orchestrator | + osism container testbed-node-0 images 2025-09-19 11:56:07.323278 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-19 11:56:07.323434 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB 2025-09-19 11:56:07.323461 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 2 months ago 329MB 2025-09-19 11:56:07.323476 | orchestrator | registry.osism.tech/kolla/release/haproxy 
2.6.12.20250711 0a9fd950fe86 2 months ago 326MB 2025-09-19 11:56:07.323509 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 2 months ago 1.59GB 2025-09-19 11:56:07.323521 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 2 months ago 1.55GB 2025-09-19 11:56:07.323532 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 2 months ago 417MB 2025-09-19 11:56:07.323542 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 2 months ago 318MB 2025-09-19 11:56:07.323565 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB 2025-09-19 11:56:07.323577 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 2 months ago 375MB 2025-09-19 11:56:07.323588 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 2 months ago 1.01GB 2025-09-19 11:56:07.323598 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB 2025-09-19 11:56:07.323610 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 2 months ago 361MB 2025-09-19 11:56:07.323649 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 2 months ago 361MB 2025-09-19 11:56:07.323662 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 2 months ago 1.21GB 2025-09-19 11:56:07.323673 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 2 months ago 353MB 2025-09-19 11:56:07.323683 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB 2025-09-19 11:56:07.323694 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 
62e13ec7689a 2 months ago 344MB 2025-09-19 11:56:07.323705 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB 2025-09-19 11:56:07.323715 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 2 months ago 324MB 2025-09-19 11:56:07.323726 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 2 months ago 351MB 2025-09-19 11:56:07.323736 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 2 months ago 324MB 2025-09-19 11:56:07.323747 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 2 months ago 590MB 2025-09-19 11:56:07.323757 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 2 months ago 947MB 2025-09-19 11:56:07.323768 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 2 months ago 946MB 2025-09-19 11:56:07.323779 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 2 months ago 947MB 2025-09-19 11:56:07.323789 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 2 months ago 946MB 2025-09-19 11:56:07.323800 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250711 05a4552273f6 2 months ago 1.04GB 2025-09-19 11:56:07.323810 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250711 41f8c34132c7 2 months ago 1.04GB 2025-09-19 11:56:07.323821 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 2 months ago 1.1GB 2025-09-19 11:56:07.323832 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 2 months ago 1.1GB 2025-09-19 11:56:07.323855 | orchestrator | registry.osism.tech/kolla/release/octavia-api 
15.0.1.20250711 6146c08f2b76 2 months ago 1.12GB 2025-09-19 11:56:07.323884 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 2 months ago 1.1GB 2025-09-19 11:56:07.323903 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 2 months ago 1.12GB 2025-09-19 11:56:07.323920 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 2 months ago 1.15GB 2025-09-19 11:56:07.323940 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 2 months ago 1.04GB 2025-09-19 11:56:07.323960 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 2 months ago 1.06GB 2025-09-19 11:56:07.323974 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 2 months ago 1.06GB 2025-09-19 11:56:07.323986 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 2 months ago 1.06GB 2025-09-19 11:56:07.323999 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 2 months ago 1.41GB 2025-09-19 11:56:07.324011 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 2 months ago 1.41GB 2025-09-19 11:56:07.324023 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 2 months ago 1.29GB 2025-09-19 11:56:07.324036 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 2 months ago 1.42GB 2025-09-19 11:56:07.324048 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 2 months ago 1.29GB 2025-09-19 11:56:07.324059 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 2 months ago 1.29GB 2025-09-19 11:56:07.324071 | orchestrator | registry.osism.tech/kolla/release/magnum-api 
19.0.1.20250711 71f47d2b2def 2 months ago 1.2GB
2025-09-19 11:56:07.324083 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 2 months ago 1.31GB
2025-09-19 11:56:07.324096 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 2 months ago 1.05GB
2025-09-19 11:56:07.324108 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 2 months ago 1.05GB
2025-09-19 11:56:07.324120 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 2 months ago 1.05GB
2025-09-19 11:56:07.324133 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 2 months ago 1.06GB
2025-09-19 11:56:07.324146 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 2 months ago 1.06GB
2025-09-19 11:56:07.324158 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 2 months ago 1.05GB
2025-09-19 11:56:07.324170 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250711 f2e37439c6b7 2 months ago 1.11GB
2025-09-19 11:56:07.324183 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250711 b3d19c53d4de 2 months ago 1.11GB
2025-09-19 11:56:07.324195 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 2 months ago 1.11GB
2025-09-19 11:56:07.324206 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 2 months ago 1.13GB
2025-09-19 11:56:07.324225 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 2 months ago 1.11GB
2025-09-19 11:56:07.324236 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 2 months ago 1.24GB
2025-09-19 11:56:07.324247 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250711 c26d685bbc69 2 months ago 1.04GB
2025-09-19 11:56:07.324257 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250711 55a7448b63ad 2 months ago 1.04GB
2025-09-19 11:56:07.324273 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250711 b8a4d60cb725 2 months ago 1.04GB
2025-09-19 11:56:07.324284 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250711 c0822bfcb81c 2 months ago 1.04GB
2025-09-19 11:56:07.324294 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 4 months ago 1.27GB
2025-09-19 11:56:07.504706 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-19 11:56:07.504975 | orchestrator | ++ semver 9.2.0 5.0.0
2025-09-19 11:56:07.551104 | orchestrator |
2025-09-19 11:56:07.551173 | orchestrator | ## Containers @ testbed-node-1
2025-09-19 11:56:07.551184 | orchestrator |
2025-09-19 11:56:07.551193 | orchestrator | + [[ 1 -eq -1 ]]
2025-09-19 11:56:07.551202 | orchestrator | + echo
2025-09-19 11:56:07.551211 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-09-19 11:56:07.551221 | orchestrator | + echo
2025-09-19 11:56:07.551230 | orchestrator | + osism container testbed-node-1 ps
2025-09-19 11:56:09.636685 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-19 11:56:09.636774 | orchestrator | 7aa2494b2450 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-09-19 11:56:09.636790 | orchestrator | 856f235c66d8 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-09-19 11:56:09.636802 | orchestrator | 939115542de4 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-09-19 11:56:09.636813 | orchestrator | 6588a998e577 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-09-19 11:56:09.636825 | orchestrator | 4ce649a98da6 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2025-09-19 11:56:09.636836 | orchestrator | db62e9446456 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-09-19 11:56:09.636847 | orchestrator | 9621c6345616 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-09-19 11:56:09.636859 | orchestrator | 3f01079d2cd8 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-09-19 11:56:09.636870 | orchestrator | aa157d361f25 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-09-19 11:56:09.636881 | orchestrator | debf3f8dc5a8 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-09-19 11:56:09.636913 | orchestrator | df75d269319e registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-09-19 11:56:09.636925 | orchestrator | 911f9cf93fdf registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-09-19 11:56:09.636936 | orchestrator | 45800fe5a331 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-09-19 11:56:09.636947 | orchestrator | 1dbfe7b3ec78 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central
2025-09-19 11:56:09.636958 | orchestrator | 693bfb6b044e registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2025-09-19 11:56:09.636969 | orchestrator | 8b797ab7ff98 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-09-19 11:56:09.636999 | orchestrator | 2951cff21884 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-09-19 11:56:09.637078 | orchestrator | b75141f91321 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-09-19 11:56:09.637092 | orchestrator | 47e03fe3de36 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2025-09-19 11:56:09.637121 | orchestrator | b33d3141cb47 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-09-19 11:56:09.637133 | orchestrator | e248d0acce75 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-19 11:56:09.637144 | orchestrator | 14e81df54845 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-09-19 11:56:09.637155 | orchestrator | 995e6af37fb7 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) barbican_api
2025-09-19 11:56:09.637166 | orchestrator | 419a1b2e5571 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2025-09-19 11:56:09.637177 | orchestrator | 756534721d73 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-09-19 11:56:09.637188 | orchestrator | 3a0b2753d12b registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2025-09-19 11:56:09.637199 | orchestrator | bd746733c768 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-09-19 11:56:09.637211 | orchestrator | 0360eea497f3 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-09-19 11:56:09.637233 | orchestrator | ebd541a1b675 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-09-19 11:56:09.637247 | orchestrator | e325dac0aa7e registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-09-19 11:56:09.637259 | orchestrator | feb053a16b36 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-09-19 11:56:09.637271 | orchestrator | 765f5792fef1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1
2025-09-19 11:56:09.637285 | orchestrator | 0ef37e2959ab registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2025-09-19 11:56:09.637297 | orchestrator | 2f073175110e registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-09-19 11:56:09.637309 | orchestrator | 7c7600f9e002 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2025-09-19 11:56:09.637321 | orchestrator | 2e91ae676e7e registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-09-19 11:56:09.637334 | orchestrator | d5bdac9dcf84 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2025-09-19 11:56:09.637346 | orchestrator | 8d89c71dedac registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-09-19 11:56:09.637359 | orchestrator | e1d5f392ecca registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch
2025-09-19 11:56:09.637377 | orchestrator | 1a759f20d7da registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-09-19 11:56:09.637481 | orchestrator | c8e86709c0b7 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1
2025-09-19 11:56:09.637496 | orchestrator | 82ec9a37b258 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2025-09-19 11:56:09.637507 | orchestrator | 3a4637cbb5ea registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2025-09-19 11:56:09.637518 | orchestrator | da878c288257 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd
2025-09-19 11:56:09.637529 | orchestrator | 8f16cb6414d6 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db
2025-09-19 11:56:09.637540 | orchestrator | 3878841d4098 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db
2025-09-19 11:56:09.637551 | orchestrator | 92b4d11469c2 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2025-09-19 11:56:09.637569 | orchestrator | 53cc38949c3b registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2025-09-19 11:56:09.637580 | orchestrator | cd76542fbd87 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1
2025-09-19 11:56:09.637591 | orchestrator | 74350eb92148 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-09-19 11:56:09.637602 | orchestrator | a9f4ae95743c registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-09-19 11:56:09.637613 | orchestrator | 5452e6238896 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-09-19 11:56:09.637624 | orchestrator | 5b41d9c55593 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2025-09-19 11:56:09.637635 | orchestrator | 5430dd61de85 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2025-09-19 11:56:09.637668 | orchestrator | 789ff467ef54 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2025-09-19 11:56:09.637680 | orchestrator | d20328cdb5a8 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-09-19 11:56:09.637691 | orchestrator | 054099f08ba9 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2025-09-19 11:56:09.812761 | orchestrator |
2025-09-19 11:56:09.812829 | orchestrator | ## Images @ testbed-node-1
2025-09-19 11:56:09.812841 | orchestrator |
2025-09-19 11:56:09.812850 | orchestrator | + echo
2025-09-19 11:56:09.812860 | orchestrator | + echo '## Images @ testbed-node-1'
2025-09-19 11:56:09.812870 | orchestrator | + echo
2025-09-19 11:56:09.812879 | orchestrator | + osism container testbed-node-1 images
2025-09-19 11:56:11.951166 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-19 11:56:11.951265 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB
2025-09-19 11:56:11.951279 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 2 months ago 329MB
2025-09-19 11:56:11.951290 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 2 months ago 326MB
2025-09-19 11:56:11.951302 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 2 months ago 1.59GB
2025-09-19 11:56:11.951313 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 2 months ago 1.55GB
2025-09-19 11:56:11.951323 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 2 months ago 417MB
2025-09-19 11:56:11.951334 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 2 months ago 318MB
2025-09-19 11:56:11.951345 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 2 months ago 375MB
2025-09-19 11:56:11.951381 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB
2025-09-19 11:56:11.951393 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 2 months ago 1.01GB
2025-09-19 11:56:11.951404 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB
2025-09-19 11:56:11.951431 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 2 months ago 361MB
2025-09-19 11:56:11.951442 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 2 months ago 361MB
2025-09-19 11:56:11.951453 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 2 months ago 1.21GB
2025-09-19 11:56:11.951464 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 2 months ago 353MB
2025-09-19 11:56:11.951475 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB
2025-09-19 11:56:11.951486 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 2 months ago 344MB
2025-09-19 11:56:11.951497 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB
2025-09-19 11:56:11.951508 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 2 months ago 324MB
2025-09-19 11:56:11.951518 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 2 months ago 351MB
2025-09-19 11:56:11.951529 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 2 months ago 324MB
2025-09-19 11:56:11.951540 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 2 months ago 590MB
2025-09-19 11:56:11.951555 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 2 months ago 946MB
2025-09-19 11:56:11.951566 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 2 months ago 947MB
2025-09-19 11:56:11.951577 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 2 months ago 947MB
2025-09-19 11:56:11.951588 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 2 months ago 946MB
2025-09-19 11:56:11.951598 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 2 months ago 1.1GB
2025-09-19 11:56:11.951609 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 2 months ago 1.1GB
2025-09-19 11:56:11.951620 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 2 months ago 1.12GB
2025-09-19 11:56:11.951631 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 2 months ago 1.1GB
2025-09-19 11:56:11.951641 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 2 months ago 1.12GB
2025-09-19 11:56:11.951689 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 2 months ago 1.15GB
2025-09-19 11:56:11.951702 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 2 months ago 1.04GB
2025-09-19 11:56:11.951713 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 2 months ago 1.06GB
2025-09-19 11:56:11.951732 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 2 months ago 1.06GB
2025-09-19 11:56:11.951743 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 2 months ago 1.06GB
2025-09-19 11:56:11.951753 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 2 months ago 1.41GB
2025-09-19 11:56:11.951764 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 2 months ago 1.41GB
2025-09-19 11:56:11.951775 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 2 months ago 1.29GB
2025-09-19 11:56:11.951785 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 2 months ago 1.42GB
2025-09-19 11:56:11.951796 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 2 months ago 1.29GB
2025-09-19 11:56:11.951807 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 2 months ago 1.29GB
2025-09-19 11:56:11.951818 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 2 months ago 1.2GB
2025-09-19 11:56:11.951829 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 2 months ago 1.31GB
2025-09-19 11:56:11.951839 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 2 months ago 1.05GB
2025-09-19 11:56:11.951850 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 2 months ago 1.05GB
2025-09-19 11:56:11.951861 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 2 months ago 1.05GB
2025-09-19 11:56:11.951871 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 2 months ago 1.06GB
2025-09-19 11:56:11.951882 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 2 months ago 1.06GB
2025-09-19 11:56:11.951893 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 2 months ago 1.05GB
2025-09-19 11:56:11.951904 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 2 months ago 1.11GB
2025-09-19 11:56:11.951914 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 2 months ago 1.13GB
2025-09-19 11:56:11.951925 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 2 months ago 1.11GB
2025-09-19 11:56:11.951936 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 2 months ago 1.24GB
2025-09-19 11:56:11.951947 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 4 months ago 1.27GB
2025-09-19 11:56:12.223286 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-19 11:56:12.224126 | orchestrator | ++ semver 9.2.0 5.0.0
2025-09-19 11:56:12.287447 | orchestrator |
2025-09-19 11:56:12.287518 | orchestrator | ## Containers @ testbed-node-2
2025-09-19 11:56:12.287526 | orchestrator |
2025-09-19 11:56:12.287533 | orchestrator | + [[ 1 -eq -1 ]]
2025-09-19 11:56:12.287540 | orchestrator | + echo
2025-09-19 11:56:12.287573 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-09-19 11:56:12.287581 | orchestrator | + echo
2025-09-19 11:56:12.287588 | orchestrator | + osism container testbed-node-2 ps
2025-09-19 11:56:14.584633 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-19 11:56:14.585094 | orchestrator | e42815a92b33 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-09-19 11:56:14.585164 | orchestrator | 9d18cb6e69be registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-09-19 11:56:14.585186 | orchestrator | a76c5c0577c3 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-09-19 11:56:14.585207 | orchestrator | 8fc4804d46f0 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-09-19 11:56:14.585229 | orchestrator | e5e6d9e35df2 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-09-19 11:56:14.585246 | orchestrator | 5d120550ebcb registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-09-19 11:56:14.585264 | orchestrator | e7e258e9b89b registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor
2025-09-19 11:56:14.585282 | orchestrator | 9157110107b1 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-09-19 11:56:14.585299 | orchestrator | a84099f1b525 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-09-19 11:56:14.585316 | orchestrator | b1cf0efa498c registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-09-19 11:56:14.585332 | orchestrator | eab529e20af5 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-09-19 11:56:14.585347 | orchestrator | 77f8dbeb5274 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-09-19 11:56:14.585365 | orchestrator | 4865575c64f8 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-09-19 11:56:14.585381 | orchestrator | 5ed6d0eb2da8 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-09-19 11:56:14.585398 | orchestrator | 1bb75117daf5 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-09-19 11:56:14.585585 | orchestrator | ecd0976aaf13 registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-09-19 11:56:14.585613 | orchestrator | b0c388a03382 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-09-19 11:56:14.585631 | orchestrator | ed74ade60be5 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-09-19 11:56:14.585648 | orchestrator | 9347cb3c452b registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2025-09-19 11:56:14.585671 | orchestrator | 6a33b66a5818 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-09-19 11:56:14.585681 | orchestrator | 1d438f7e604c registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-19 11:56:14.585729 | orchestrator | 3d2bd94a9da8 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-09-19 11:56:14.585739 | orchestrator | 2b4b58046486 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2025-09-19 11:56:14.585749 | orchestrator | 3ce93a735548 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2025-09-19 11:56:14.585758 | orchestrator | 927fcf1aacf1 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-09-19 11:56:14.585768 | orchestrator | 056e242871cb registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2025-09-19 11:56:14.585778 | orchestrator | 91511f502a60 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-09-19 11:56:14.585790 | orchestrator | 6932c676369b registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-09-19 11:56:14.585800 | orchestrator | 95076e4d1429 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-09-19 11:56:14.585810 | orchestrator | 79324e9126fe registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-09-19 11:56:14.585835 | orchestrator | edac96332de4 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-09-19 11:56:14.585845 | orchestrator | 1bbe85d8a760 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2
2025-09-19 11:56:14.585855 | orchestrator | 74921fd88d05 registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-09-19 11:56:14.585864 | orchestrator | 8e9bd4f643d6 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-09-19 11:56:14.585874 | orchestrator | 8e2c71a9fc96 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2025-09-19 11:56:14.585883 | orchestrator | 8f597b0da865 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-09-19 11:56:14.585905 | orchestrator | c2569d51aaa2 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2025-09-19 11:56:14.585921 | orchestrator | 671c1762768c registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2025-09-19 11:56:14.585931 | orchestrator | e9c412d9d247 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch
2025-09-19 11:56:14.585946 | orchestrator | 588478ae4976 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-09-19 11:56:14.585956 | orchestrator | c3176cf6548c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2
2025-09-19 11:56:14.585966 | orchestrator | f1fd754ee503 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2025-09-19 11:56:14.585975 | orchestrator | 20ee309b6968 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2025-09-19 11:56:14.585985 | orchestrator | c5a924f99e7f registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd
2025-09-19 11:56:14.585994 | orchestrator | a9adcc218892 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db
2025-09-19 11:56:14.586004 | orchestrator | 4953e6e36148 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db
2025-09-19 11:56:14.586068 | orchestrator | 51c1785b349c registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2025-09-19 11:56:14.586082 | orchestrator | 6bb6f7f11d21 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2025-09-19 11:56:14.586091 | orchestrator | 6130b7b77df6 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2
2025-09-19 11:56:14.586101 | orchestrator | f4eb4464c865 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-09-19 11:56:14.586110 | orchestrator | faf56763e0f2 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-09-19 11:56:14.586120 | orchestrator | e920b81d340f registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-09-19 11:56:14.586129 | orchestrator | e6a2c636efc0 registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2025-09-19 11:56:14.586139 | orchestrator | 9d859622158d registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2025-09-19 11:56:14.586148 | orchestrator | 087ef485f9b6 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-09-19 11:56:14.586158 | orchestrator | 3599ddc036a5 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-09-19 11:56:14.586174 | orchestrator | e384310189e4 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2025-09-19 11:56:14.854052 | orchestrator |
2025-09-19 11:56:14.854139 | orchestrator | ## Images @ testbed-node-2
2025-09-19 11:56:14.854151 | orchestrator |
2025-09-19 11:56:14.854159 | orchestrator | + echo
2025-09-19 11:56:14.854166 | orchestrator | + echo '## Images @ testbed-node-2'
2025-09-19 11:56:14.854174 | orchestrator | + echo
2025-09-19 11:56:14.854181 | orchestrator | + osism container testbed-node-2 images
2025-09-19 11:56:17.063656 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-19 11:56:17.063773 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 2 months ago 628MB
2025-09-19 11:56:17.063787 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 2 months ago 329MB
2025-09-19 11:56:17.063797 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 2 months ago 326MB
2025-09-19 11:56:17.063806 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 2 months ago 1.59GB
2025-09-19 11:56:17.063816 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 2 months ago 1.55GB
2025-09-19 11:56:17.063825 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 2 months ago 417MB
2025-09-19 11:56:17.063835 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 2 months ago 318MB
2025-09-19 11:56:17.063844 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 2 months ago 375MB
2025-09-19 11:56:17.063853 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 2 months ago 746MB
2025-09-19 11:56:17.063863 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 2 months ago 1.01GB
2025-09-19 11:56:17.063872 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 2 months ago 318MB
2025-09-19 11:56:17.063878 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 2 months ago 361MB
2025-09-19 11:56:17.063883 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 2 months ago 361MB
2025-09-19 11:56:17.063889 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 2 months ago 1.21GB
2025-09-19 11:56:17.063917 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 2 months ago 353MB
2025-09-19 11:56:17.063927 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 2 months ago 410MB
2025-09-19 11:56:17.063937 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 2 months ago 344MB
2025-09-19 11:56:17.063946 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 2 months ago 358MB
2025-09-19 11:56:17.063955 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 2 months ago 324MB
2025-09-19 11:56:17.063964 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 2 months ago 351MB
2025-09-19 11:56:17.063974 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 2 months ago 324MB
2025-09-19 11:56:17.064004 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 2 months ago 590MB
2025-09-19 11:56:17.064013 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 2 months ago 946MB
2025-09-19 11:56:17.064022 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 2 months ago 947MB
2025-09-19 11:56:17.064031 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 2 months ago 947MB
2025-09-19 11:56:17.064040 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 2 months ago 946MB
2025-09-19 11:56:17.064049 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 2 months ago 1.1GB
2025-09-19 11:56:17.064059 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 2 months ago 1.1GB
2025-09-19 11:56:17.064067 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 2 months ago 1.12GB
2025-09-19 11:56:17.064076 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 2 months ago 1.1GB
2025-09-19 11:56:17.064085 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 2 months ago 1.12GB
2025-09-19 11:56:17.064111 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 2 months ago 1.15GB
2025-09-19 11:56:17.064120 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 2 months ago 1.04GB
2025-09-19 11:56:17.064129 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 2 months ago 1.06GB
2025-09-19 11:56:17.064138 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 2 months ago 1.06GB
2025-09-19 11:56:17.064148 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 2 months ago 1.06GB
2025-09-19 11:56:17.064161 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 2 months ago 1.41GB
2025-09-19 11:56:17.064170 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 2 months ago 1.41GB
2025-09-19 11:56:17.064179 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 2 months ago 1.29GB
2025-09-19 11:56:17.064188 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 2 months ago 1.42GB
2025-09-19 11:56:17.064197 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 2 months ago 1.29GB
2025-09-19 11:56:17.064205 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 2 months ago 1.29GB
2025-09-19 11:56:17.064214 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 2 months ago 1.2GB
2025-09-19 11:56:17.064223 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 2 months ago 1.31GB
2025-09-19 11:56:17.064231 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 2 months ago 1.05GB
2025-09-19 11:56:17.064240 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 2 months ago 1.05GB
2025-09-19
11:56:17.064250 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 2 months ago 1.05GB 2025-09-19 11:56:17.064267 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 2 months ago 1.06GB 2025-09-19 11:56:17.064276 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 2 months ago 1.06GB 2025-09-19 11:56:17.064285 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 2 months ago 1.05GB 2025-09-19 11:56:17.064295 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 2 months ago 1.11GB 2025-09-19 11:56:17.064304 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 2 months ago 1.13GB 2025-09-19 11:56:17.064313 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 2 months ago 1.11GB 2025-09-19 11:56:17.064322 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 2 months ago 1.24GB 2025-09-19 11:56:17.064332 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 4 months ago 1.27GB 2025-09-19 11:56:17.369088 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-09-19 11:56:17.373810 | orchestrator | + set -e 2025-09-19 11:56:17.373838 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 11:56:17.374544 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 11:56:17.374559 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 11:56:17.374567 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 11:56:17.374574 | orchestrator | ++ CEPH_VERSION=reef 2025-09-19 11:56:17.374582 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 11:56:17.374591 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 11:56:17.374598 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-09-19 
11:56:17.374606 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-09-19 11:56:17.374613 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 11:56:17.374621 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 11:56:17.374628 | orchestrator | ++ export ARA=false 2025-09-19 11:56:17.374635 | orchestrator | ++ ARA=false 2025-09-19 11:56:17.374643 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 11:56:17.374650 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 11:56:17.374657 | orchestrator | ++ export TEMPEST=false 2025-09-19 11:56:17.374664 | orchestrator | ++ TEMPEST=false 2025-09-19 11:56:17.374671 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 11:56:17.374682 | orchestrator | ++ IS_ZUUL=true 2025-09-19 11:56:17.374689 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-09-19 11:56:17.374697 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-09-19 11:56:17.374704 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 11:56:17.374711 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 11:56:17.374739 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 11:56:17.374746 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 11:56:17.374753 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 11:56:17.374761 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 11:56:17.374768 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 11:56:17.374775 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 11:56:17.374782 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-19 11:56:17.374790 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-09-19 11:56:17.384248 | orchestrator | + set -e 2025-09-19 11:56:17.384267 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 11:56:17.384329 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 11:56:17.384337 | orchestrator | ++ INTERACTIVE=false 2025-09-19 11:56:17.384344 | 
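The trace above shows `check-services.sh` sourcing `/opt/manager-vars.sh` and then branching on `CEPH_STACK` to run `check/100-ceph-with-ansible.sh`. A minimal sketch of that dispatch pattern, using the paths visible in the log; the helper name `pick_ceph_check` is ours, and the real testbed script may be structured differently:

```shell
#!/bin/sh
# Sketch only: mirrors the dispatch seen in the trace above.
# pick_ceph_check is a hypothetical helper, not part of the testbed repository.
set -e

pick_ceph_check() {
    # $1: value of CEPH_STACK (exported by /opt/manager-vars.sh in the log)
    case "$1" in
        ceph-ansible)
            echo "/opt/configuration/scripts/check/100-ceph-with-ansible.sh"
            ;;
        *)
            echo "unsupported CEPH_STACK: $1" >&2
            return 1
            ;;
    esac
}
```

In the log `CEPH_STACK=ceph-ansible`, so the `[[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]` test succeeds and the ceph-ansible check script is executed.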
orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 11:56:17.384351 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 11:56:17.384362 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-09-19 11:56:17.386162 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-09-19 11:56:17.389532 | orchestrator |
2025-09-19 11:56:17.389548 | orchestrator | # Ceph status
2025-09-19 11:56:17.389556 | orchestrator |
2025-09-19 11:56:17.389564 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-19 11:56:17.389571 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-19 11:56:17.389579 | orchestrator | + echo
2025-09-19 11:56:17.389587 | orchestrator | + echo '# Ceph status'
2025-09-19 11:56:17.389595 | orchestrator | + echo
2025-09-19 11:56:17.389602 | orchestrator | + ceph -s
2025-09-19 11:56:17.957043 | orchestrator | cluster:
2025-09-19 11:56:17.957144 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2025-09-19 11:56:17.957160 | orchestrator | health: HEALTH_OK
2025-09-19 11:56:17.957172 | orchestrator |
2025-09-19 11:56:17.957183 | orchestrator | services:
2025-09-19 11:56:17.957195 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m)
2025-09-19 11:56:17.957207 | orchestrator | mgr: testbed-node-0(active, since 15m), standbys: testbed-node-2, testbed-node-1
2025-09-19 11:56:17.957219 | orchestrator | mds: 1/1 daemons up, 2 standby
2025-09-19 11:56:17.957231 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 24m)
2025-09-19 11:56:17.957242 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2025-09-19 11:56:17.957253 | orchestrator |
2025-09-19 11:56:17.957264 | orchestrator | data:
2025-09-19 11:56:17.957275 | orchestrator | volumes: 1/1 healthy
2025-09-19 11:56:17.957286 | orchestrator | pools: 14 pools, 401 pgs
2025-09-19 11:56:17.957297 | orchestrator | objects: 524 objects, 2.2 GiB
2025-09-19 11:56:17.957308 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2025-09-19 11:56:17.957319 | orchestrator | pgs: 401 active+clean
2025-09-19 11:56:17.957330 | orchestrator |
2025-09-19 11:56:18.006923 | orchestrator |
2025-09-19 11:56:18.006986 | orchestrator | # Ceph versions
2025-09-19 11:56:18.006998 | orchestrator |
2025-09-19 11:56:18.007009 | orchestrator | + echo
2025-09-19 11:56:18.007020 | orchestrator | + echo '# Ceph versions'
2025-09-19 11:56:18.007032 | orchestrator | + echo
2025-09-19 11:56:18.007044 | orchestrator | + ceph versions
2025-09-19 11:56:18.594995 | orchestrator | {
2025-09-19 11:56:18.595089 | orchestrator | "mon": {
2025-09-19 11:56:18.595104 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 11:56:18.595117 | orchestrator | },
2025-09-19 11:56:18.595129 | orchestrator | "mgr": {
2025-09-19 11:56:18.595140 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 11:56:18.595151 | orchestrator | },
2025-09-19 11:56:18.595161 | orchestrator | "osd": {
2025-09-19 11:56:18.595173 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-09-19 11:56:18.595183 | orchestrator | },
2025-09-19 11:56:18.595194 | orchestrator | "mds": {
2025-09-19 11:56:18.595205 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 11:56:18.595215 | orchestrator | },
2025-09-19 11:56:18.595226 | orchestrator | "rgw": {
2025-09-19 11:56:18.595237 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 11:56:18.595247 | orchestrator | },
2025-09-19 11:56:18.595258 | orchestrator | "overall": {
2025-09-19 11:56:18.595269 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-09-19 11:56:18.595281 | orchestrator | }
2025-09-19 11:56:18.595292 | orchestrator | }
2025-09-19 11:56:18.646565 | orchestrator |
2025-09-19 11:56:18.646630 | orchestrator | # Ceph OSD tree
2025-09-19 11:56:18.646643 | orchestrator |
2025-09-19 11:56:18.646655 | orchestrator | + echo
2025-09-19 11:56:18.646667 | orchestrator | + echo '# Ceph OSD tree'
2025-09-19 11:56:18.646679 | orchestrator | + echo
2025-09-19 11:56:18.646691 | orchestrator | + ceph osd df tree
2025-09-19 11:56:19.169357 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2025-09-19 11:56:19.169462 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default
2025-09-19 11:56:19.169477 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3
2025-09-19 11:56:19.169489 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.22 1.05 199 up osd.0
2025-09-19 11:56:19.169501 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.62 0.95 193 up osd.5
2025-09-19 11:56:19.169513 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2025-09-19 11:56:19.169524 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.24 1.05 192 up osd.1
2025-09-19 11:56:19.169535 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.60 0.95 196 up osd.4
2025-09-19 11:56:19.169568 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2025-09-19 11:56:19.169580 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.0 GiB 955 MiB 1 KiB 70 MiB 19 GiB 5.01 0.85 196 up osd.2
2025-09-19 11:56:19.169591 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.83 1.15 194 up osd.3
2025-09-19 11:56:19.169602 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2025-09-19 11:56:19.169613 | orchestrator |
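The `ceph -s` output above reports `health: HEALTH_OK`. A check script can gate on that line directly; a minimal sketch (the helper name is ours, and in practice `ceph -s --format json` piped through `jq` would be the more robust approach):

```shell
# Succeed only when `ceph -s`-style text on stdin reports HEALTH_OK.
# Reading stdin keeps the helper testable without a live cluster; in the
# job it would be used as: ceph -s | ceph_health_ok
ceph_health_ok() {
    grep -qE 'health:[[:space:]]*HEALTH_OK'
}
```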
MIN/MAX VAR: 0.85/1.15 STDDEV: 0.58
2025-09-19 11:56:19.214123 | orchestrator |
2025-09-19 11:56:19.214235 | orchestrator | # Ceph monitor status
2025-09-19 11:56:19.214259 | orchestrator |
2025-09-19 11:56:19.214279 | orchestrator | + echo
2025-09-19 11:56:19.214299 | orchestrator | + echo '# Ceph monitor status'
2025-09-19 11:56:19.214318 | orchestrator | + echo
2025-09-19 11:56:19.214337 | orchestrator | + ceph mon stat
2025-09-19 11:56:19.781817 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-09-19 11:56:19.826381 | orchestrator |
2025-09-19 11:56:19.826481 | orchestrator | # Ceph quorum status
2025-09-19 11:56:19.826497 | orchestrator |
2025-09-19 11:56:19.826509 | orchestrator | + echo
2025-09-19 11:56:19.826521 | orchestrator | + echo '# Ceph quorum status'
2025-09-19 11:56:19.826532 | orchestrator | + echo
2025-09-19 11:56:19.826543 | orchestrator | + ceph quorum_status
2025-09-19 11:56:19.826910 | orchestrator | + jq
2025-09-19 11:56:20.521301 | orchestrator | {
2025-09-19 11:56:20.521399 | orchestrator | "election_epoch": 4,
2025-09-19 11:56:20.521413 | orchestrator | "quorum": [
2025-09-19 11:56:20.521426 | orchestrator | 0,
2025-09-19 11:56:20.521437 | orchestrator | 1,
2025-09-19 11:56:20.521448 | orchestrator | 2
2025-09-19 11:56:20.521459 | orchestrator | ],
2025-09-19 11:56:20.521470 | orchestrator | "quorum_names": [
2025-09-19 11:56:20.521481 | orchestrator | "testbed-node-0",
2025-09-19 11:56:20.521492 | orchestrator | "testbed-node-1",
2025-09-19 11:56:20.521503 | orchestrator | "testbed-node-2"
2025-09-19 11:56:20.521514 | orchestrator | ],
2025-09-19 11:56:20.521526 | orchestrator | "quorum_leader_name": "testbed-node-0",
2025-09-19 11:56:20.521538 | orchestrator | "quorum_age": 1629,
2025-09-19 11:56:20.521549 | orchestrator | "features": {
2025-09-19 11:56:20.521560 | orchestrator | "quorum_con": "4540138322906710015",
2025-09-19 11:56:20.521571 | orchestrator | "quorum_mon": [
2025-09-19 11:56:20.521582 | orchestrator | "kraken",
2025-09-19 11:56:20.521593 | orchestrator | "luminous",
2025-09-19 11:56:20.521604 | orchestrator | "mimic",
2025-09-19 11:56:20.521615 | orchestrator | "osdmap-prune",
2025-09-19 11:56:20.521625 | orchestrator | "nautilus",
2025-09-19 11:56:20.521636 | orchestrator | "octopus",
2025-09-19 11:56:20.521647 | orchestrator | "pacific",
2025-09-19 11:56:20.521658 | orchestrator | "elector-pinging",
2025-09-19 11:56:20.521669 | orchestrator | "quincy",
2025-09-19 11:56:20.521679 | orchestrator | "reef"
2025-09-19 11:56:20.521691 | orchestrator | ]
2025-09-19 11:56:20.521702 | orchestrator | },
2025-09-19 11:56:20.521713 | orchestrator | "monmap": {
2025-09-19 11:56:20.521724 | orchestrator | "epoch": 1,
2025-09-19 11:56:20.521735 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2025-09-19 11:56:20.521768 | orchestrator | "modified": "2025-09-19T11:28:59.829213Z",
2025-09-19 11:56:20.521780 | orchestrator | "created": "2025-09-19T11:28:59.829213Z",
2025-09-19 11:56:20.521791 | orchestrator | "min_mon_release": 18,
2025-09-19 11:56:20.521802 | orchestrator | "min_mon_release_name": "reef",
2025-09-19 11:56:20.521813 | orchestrator | "election_strategy": 1,
2025-09-19 11:56:20.521824 | orchestrator | "disallowed_leaders: ": "",
2025-09-19 11:56:20.521835 | orchestrator | "stretch_mode": false,
2025-09-19 11:56:20.521862 | orchestrator | "tiebreaker_mon": "",
2025-09-19 11:56:20.521874 | orchestrator | "removed_ranks: ": "",
2025-09-19 11:56:20.521886 | orchestrator | "features": {
2025-09-19 11:56:20.521898 | orchestrator | "persistent": [
2025-09-19 11:56:20.521910 | orchestrator | "kraken",
2025-09-19 11:56:20.521922 | orchestrator | "luminous",
2025-09-19 11:56:20.521934 | orchestrator | "mimic",
2025-09-19 11:56:20.521966 | orchestrator | "osdmap-prune",
2025-09-19 11:56:20.521979 | orchestrator | "nautilus",
2025-09-19 11:56:20.521991 | orchestrator | "octopus",
2025-09-19 11:56:20.522003 | orchestrator | "pacific",
2025-09-19 11:56:20.522065 | orchestrator | "elector-pinging",
2025-09-19 11:56:20.522079 | orchestrator | "quincy",
2025-09-19 11:56:20.522091 | orchestrator | "reef"
2025-09-19 11:56:20.522103 | orchestrator | ],
2025-09-19 11:56:20.522115 | orchestrator | "optional": []
2025-09-19 11:56:20.522127 | orchestrator | },
2025-09-19 11:56:20.522139 | orchestrator | "mons": [
2025-09-19 11:56:20.522151 | orchestrator | {
2025-09-19 11:56:20.522163 | orchestrator | "rank": 0,
2025-09-19 11:56:20.522176 | orchestrator | "name": "testbed-node-0",
2025-09-19 11:56:20.522188 | orchestrator | "public_addrs": {
2025-09-19 11:56:20.522199 | orchestrator | "addrvec": [
2025-09-19 11:56:20.522211 | orchestrator | {
2025-09-19 11:56:20.522223 | orchestrator | "type": "v2",
2025-09-19 11:56:20.522235 | orchestrator | "addr": "192.168.16.10:3300",
2025-09-19 11:56:20.522247 | orchestrator | "nonce": 0
2025-09-19 11:56:20.522259 | orchestrator | },
2025-09-19 11:56:20.522271 | orchestrator | {
2025-09-19 11:56:20.522283 | orchestrator | "type": "v1",
2025-09-19 11:56:20.522295 | orchestrator | "addr": "192.168.16.10:6789",
2025-09-19 11:56:20.522307 | orchestrator | "nonce": 0
2025-09-19 11:56:20.522319 | orchestrator | }
2025-09-19 11:56:20.522330 | orchestrator | ]
2025-09-19 11:56:20.522340 | orchestrator | },
2025-09-19 11:56:20.522351 | orchestrator | "addr": "192.168.16.10:6789/0",
2025-09-19 11:56:20.522362 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2025-09-19 11:56:20.522373 | orchestrator | "priority": 0,
2025-09-19 11:56:20.522384 | orchestrator | "weight": 0,
2025-09-19 11:56:20.522394 | orchestrator | "crush_location": "{}"
2025-09-19 11:56:20.522405 | orchestrator | },
2025-09-19 11:56:20.522416 | orchestrator | {
2025-09-19 11:56:20.522426 | orchestrator | "rank": 1,
2025-09-19 11:56:20.522437 | orchestrator | "name": "testbed-node-1",
2025-09-19 11:56:20.522448 | orchestrator | "public_addrs": {
2025-09-19 11:56:20.522459 | orchestrator | "addrvec": [
2025-09-19 11:56:20.522469 | orchestrator | {
2025-09-19 11:56:20.522480 | orchestrator | "type": "v2",
2025-09-19 11:56:20.522491 | orchestrator | "addr": "192.168.16.11:3300",
2025-09-19 11:56:20.522501 | orchestrator | "nonce": 0
2025-09-19 11:56:20.522512 | orchestrator | },
2025-09-19 11:56:20.522523 | orchestrator | {
2025-09-19 11:56:20.522534 | orchestrator | "type": "v1",
2025-09-19 11:56:20.522544 | orchestrator | "addr": "192.168.16.11:6789",
2025-09-19 11:56:20.522555 | orchestrator | "nonce": 0
2025-09-19 11:56:20.522566 | orchestrator | }
2025-09-19 11:56:20.522576 | orchestrator | ]
2025-09-19 11:56:20.522587 | orchestrator | },
2025-09-19 11:56:20.522598 | orchestrator | "addr": "192.168.16.11:6789/0",
2025-09-19 11:56:20.522609 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2025-09-19 11:56:20.522619 | orchestrator | "priority": 0,
2025-09-19 11:56:20.522630 | orchestrator | "weight": 0,
2025-09-19 11:56:20.522641 | orchestrator | "crush_location": "{}"
2025-09-19 11:56:20.522651 | orchestrator | },
2025-09-19 11:56:20.522662 | orchestrator | {
2025-09-19 11:56:20.522673 | orchestrator | "rank": 2,
2025-09-19 11:56:20.522683 | orchestrator | "name": "testbed-node-2",
2025-09-19 11:56:20.522694 | orchestrator | "public_addrs": {
2025-09-19 11:56:20.522705 | orchestrator | "addrvec": [
2025-09-19 11:56:20.522715 | orchestrator | {
2025-09-19 11:56:20.522726 | orchestrator | "type": "v2",
2025-09-19 11:56:20.522737 | orchestrator | "addr": "192.168.16.12:3300",
2025-09-19 11:56:20.522778 | orchestrator | "nonce": 0
2025-09-19 11:56:20.522789 | orchestrator | },
2025-09-19 11:56:20.522800 | orchestrator | {
2025-09-19 11:56:20.522811 | orchestrator | "type": "v1",
2025-09-19 11:56:20.522822 | orchestrator | "addr": "192.168.16.12:6789",
2025-09-19 11:56:20.522833 | orchestrator | "nonce": 0
2025-09-19 11:56:20.522843 | orchestrator | }
2025-09-19 11:56:20.522854 | orchestrator | ]
2025-09-19 11:56:20.522865 | orchestrator | },
2025-09-19 11:56:20.522876 | orchestrator | "addr": "192.168.16.12:6789/0",
2025-09-19 11:56:20.522887 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2025-09-19 11:56:20.522897 | orchestrator | "priority": 0,
2025-09-19 11:56:20.522908 | orchestrator | "weight": 0,
2025-09-19 11:56:20.522928 | orchestrator | "crush_location": "{}"
2025-09-19 11:56:20.522938 | orchestrator | }
2025-09-19 11:56:20.522949 | orchestrator | ]
2025-09-19 11:56:20.522960 | orchestrator | }
2025-09-19 11:56:20.522971 | orchestrator | }
2025-09-19 11:56:20.522982 | orchestrator |
2025-09-19 11:56:20.522993 | orchestrator | # Ceph free space status
2025-09-19 11:56:20.523004 | orchestrator |
2025-09-19 11:56:20.523015 | orchestrator | + echo
2025-09-19 11:56:20.523026 | orchestrator | + echo '# Ceph free space status'
2025-09-19 11:56:20.523037 | orchestrator | + echo
2025-09-19 11:56:20.523048 | orchestrator | + ceph df
2025-09-19 11:56:21.098501 | orchestrator | --- RAW STORAGE ---
2025-09-19 11:56:21.098601 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-09-19 11:56:21.098617 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-09-19 11:56:21.098629 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-09-19 11:56:21.098640 | orchestrator |
2025-09-19 11:56:21.098652 | orchestrator | --- POOLS ---
2025-09-19 11:56:21.098664 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-09-19 11:56:21.098677 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2025-09-19 11:56:21.098688 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-09-19 11:56:21.098699 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-09-19 11:56:21.098710 | orchestrator
| default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-09-19 11:56:21.098721 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-09-19 11:56:21.098731 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-09-19 11:56:21.098742 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB
2025-09-19 11:56:21.098808 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-09-19 11:56:21.098822 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB
2025-09-19 11:56:21.098833 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-09-19 11:56:21.098843 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-09-19 11:56:21.098854 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.92 35 GiB
2025-09-19 11:56:21.098864 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-09-19 11:56:21.098875 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-09-19 11:56:21.140219 | orchestrator | ++ semver 9.2.0 5.0.0
2025-09-19 11:56:21.200193 | orchestrator | + [[ 1 -eq -1 ]]
2025-09-19 11:56:21.200300 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-09-19 11:56:21.200316 | orchestrator | + osism apply facts
2025-09-19 11:56:33.242802 | orchestrator | 2025-09-19 11:56:33 | INFO  | Task 8fdf6b09-0f6a-41ca-8b4e-4321fc5e5547 (facts) was prepared for execution.
2025-09-19 11:56:33.242952 | orchestrator | 2025-09-19 11:56:33 | INFO  | It takes a moment until task 8fdf6b09-0f6a-41ca-8b4e-4321fc5e5547 (facts) has been started and output is visible here.
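The `++ semver 9.2.0 5.0.0` / `+ [[ 1 -eq -1 ]]` trace above uses a comparator that prints -1, 0, or 1. A dependency-free stand-in built on GNU `sort -V`; the helper name is ours, and the testbed's actual `semver` command may order pre-release tags differently:

```shell
# Compare two dotted versions; print -1, 0 or 1 like the `semver` call in
# the trace above. Relies on GNU coreutils `sort -V` for version ordering.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        printf '0\n'
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]; then
        printf '%s\n' -1   # $1 sorts first, so $1 < $2
    else
        printf '1\n'
    fi
}
```

For `9.2.0` versus `5.0.0` this prints `1`, so a `[[ 1 -eq -1 ]]` guard (older-than-5.0.0 handling) is skipped, matching the log.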
2025-09-19 11:56:45.998912 | orchestrator |
2025-09-19 11:56:45.999052 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-19 11:56:45.999071 | orchestrator |
2025-09-19 11:56:45.999084 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-19 11:56:45.999095 | orchestrator | Friday 19 September 2025 11:56:37 +0000 (0:00:00.280) 0:00:00.280 ******
2025-09-19 11:56:45.999107 | orchestrator | ok: [testbed-manager]
2025-09-19 11:56:45.999120 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:56:45.999131 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:56:45.999142 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:56:45.999153 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:56:45.999164 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:56:45.999175 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:56:45.999186 | orchestrator |
2025-09-19 11:56:45.999197 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-19 11:56:45.999207 | orchestrator | Friday 19 September 2025 11:56:38 +0000 (0:00:01.458) 0:00:01.738 ******
2025-09-19 11:56:45.999244 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:56:45.999257 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:56:45.999268 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:56:45.999279 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:56:45.999290 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:56:45.999301 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:56:45.999312 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:56:45.999322 | orchestrator |
2025-09-19 11:56:45.999333 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 11:56:45.999344 | orchestrator |
2025-09-19 11:56:45.999355 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 11:56:45.999366 | orchestrator | Friday 19 September 2025 11:56:40 +0000 (0:00:01.292) 0:00:03.031 ******
2025-09-19 11:56:45.999377 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:56:45.999388 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:56:45.999398 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:56:45.999409 | orchestrator | ok: [testbed-manager]
2025-09-19 11:56:45.999420 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:56:45.999431 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:56:45.999442 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:56:45.999453 | orchestrator |
2025-09-19 11:56:45.999465 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-19 11:56:45.999478 | orchestrator |
2025-09-19 11:56:45.999490 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-19 11:56:45.999502 | orchestrator | Friday 19 September 2025 11:56:45 +0000 (0:00:04.848) 0:00:07.879 ******
2025-09-19 11:56:45.999514 | orchestrator | skipping: [testbed-manager]
2025-09-19 11:56:45.999526 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:56:45.999538 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:56:45.999550 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:56:45.999562 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:56:45.999573 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:56:45.999586 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:56:45.999598 | orchestrator |
2025-09-19 11:56:45.999610 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:56:45.999623 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:56:45.999653 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:56:45.999666 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:56:45.999680 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:56:45.999691 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:56:45.999703 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:56:45.999716 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:56:45.999727 | orchestrator |
2025-09-19 11:56:45.999740 | orchestrator |
2025-09-19 11:56:45.999752 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:56:45.999764 | orchestrator | Friday 19 September 2025 11:56:45 +0000 (0:00:00.541) 0:00:08.420 ******
2025-09-19 11:56:45.999776 | orchestrator | ===============================================================================
2025-09-19 11:56:45.999789 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.85s
2025-09-19 11:56:45.999810 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.46s
2025-09-19 11:56:45.999924 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.29s
2025-09-19 11:56:45.999936 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s
2025-09-19 11:56:46.285550 | orchestrator | + osism validate ceph-mons
2025-09-19 11:57:17.786792 | orchestrator |
2025-09-19 11:57:17.786928 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-09-19 11:57:17.786944 | orchestrator |
2025-09-19 11:57:17.786956 | orchestrator | TASK [Get timestamp for report file]
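The PLAY RECAP above shows `failed=0` and `unreachable=0` for every host. A CI wrapper can turn such a recap into an exit status; a minimal sketch (the helper name is ours) that scans recap text on stdin:

```shell
# Succeed only if no recap line reports failed or unreachable hosts.
# In the job this would be fed the captured Ansible output.
recap_ok() {
    ! grep -qE '(failed|unreachable)=[1-9]'
}
```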
*******************************************
2025-09-19 11:57:17.786967 | orchestrator | Friday 19 September 2025 11:57:02 +0000 (0:00:00.428) 0:00:00.428 ******
2025-09-19 11:57:17.786979 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 11:57:17.786990 | orchestrator |
2025-09-19 11:57:17.787002 | orchestrator | TASK [Create report output directory] ******************************************
2025-09-19 11:57:17.787013 | orchestrator | Friday 19 September 2025 11:57:03 +0000 (0:00:00.626) 0:00:01.055 ******
2025-09-19 11:57:17.787024 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 11:57:17.787035 | orchestrator |
2025-09-19 11:57:17.787046 | orchestrator | TASK [Define report vars] ******************************************************
2025-09-19 11:57:17.787078 | orchestrator | Friday 19 September 2025 11:57:04 +0000 (0:00:00.831) 0:00:01.886 ******
2025-09-19 11:57:17.787090 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:57:17.787102 | orchestrator |
2025-09-19 11:57:17.787113 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-09-19 11:57:17.787124 | orchestrator | Friday 19 September 2025 11:57:04 +0000 (0:00:00.240) 0:00:02.127 ******
2025-09-19 11:57:17.787135 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:57:17.787146 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:57:17.787157 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:57:17.787168 | orchestrator |
2025-09-19 11:57:17.787179 | orchestrator | TASK [Get container info] ******************************************************
2025-09-19 11:57:17.787189 | orchestrator | Friday 19 September 2025 11:57:04 +0000 (0:00:00.291) 0:00:02.419 ******
2025-09-19 11:57:17.787200 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:57:17.787211 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:57:17.787222 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:57:17.787232 | orchestrator |
2025-09-19 11:57:17.787243 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-09-19 11:57:17.787254 | orchestrator | Friday 19 September 2025 11:57:05 +0000 (0:00:01.017) 0:00:03.437 ******
2025-09-19 11:57:17.787295 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:57:17.787308 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:57:17.787320 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:57:17.787332 | orchestrator |
2025-09-19 11:57:17.787344 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-09-19 11:57:17.787356 | orchestrator | Friday 19 September 2025 11:57:05 +0000 (0:00:00.274) 0:00:03.711 ******
2025-09-19 11:57:17.787368 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:57:17.787380 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:57:17.787392 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:57:17.787404 | orchestrator |
2025-09-19 11:57:17.787416 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-19 11:57:17.787428 | orchestrator | Friday 19 September 2025 11:57:06 +0000 (0:00:00.471) 0:00:04.182 ******
2025-09-19 11:57:17.787440 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:57:17.787452 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:57:17.787464 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:57:17.787476 | orchestrator |
2025-09-19 11:57:17.787488 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-09-19 11:57:17.787501 | orchestrator | Friday 19 September 2025 11:57:06 +0000 (0:00:00.304) 0:00:04.488 ******
2025-09-19 11:57:17.787513 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:57:17.787525 | orchestrator | skipping: [testbed-node-1]
2025-09-19 11:57:17.787562 | orchestrator | skipping: [testbed-node-2]
2025-09-19 11:57:17.787576 | orchestrator |
2025-09-19 11:57:17.787588 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-09-19 11:57:17.787600 | orchestrator | Friday 19 September 2025 11:57:06 +0000 (0:00:00.282) 0:00:04.770 ******
2025-09-19 11:57:17.787612 | orchestrator | ok: [testbed-node-0]
2025-09-19 11:57:17.787624 | orchestrator | ok: [testbed-node-1]
2025-09-19 11:57:17.787636 | orchestrator | ok: [testbed-node-2]
2025-09-19 11:57:17.787647 | orchestrator |
2025-09-19 11:57:17.787658 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-19 11:57:17.787669 | orchestrator | Friday 19 September 2025 11:57:07 +0000 (0:00:00.311) 0:00:05.082 ******
2025-09-19 11:57:17.787680 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:57:17.787690 | orchestrator |
2025-09-19 11:57:17.787701 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-19 11:57:17.787712 | orchestrator | Friday 19 September 2025 11:57:07 +0000 (0:00:00.651) 0:00:05.734 ******
2025-09-19 11:57:17.787723 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:57:17.787733 | orchestrator |
2025-09-19 11:57:17.787744 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-19 11:57:17.787754 | orchestrator | Friday 19 September 2025 11:57:08 +0000 (0:00:00.249) 0:00:05.983 ******
2025-09-19 11:57:17.787765 | orchestrator | skipping: [testbed-node-0]
2025-09-19 11:57:17.787776 | orchestrator |
2025-09-19 11:57:17.787786 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 11:57:17.787797 | orchestrator | Friday 19 September 2025 11:57:08 +0000 (0:00:00.248) 0:00:06.232 ******
2025-09-19 11:57:17.787808 | orchestrator |
2025-09-19 11:57:17.787818 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 11:57:17.787829 | orchestrator |
Friday 19 September 2025 11:57:08 +0000 (0:00:00.068) 0:00:06.300 ****** 2025-09-19 11:57:17.787839 | orchestrator | 2025-09-19 11:57:17.787850 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 11:57:17.787861 | orchestrator | Friday 19 September 2025 11:57:08 +0000 (0:00:00.068) 0:00:06.369 ****** 2025-09-19 11:57:17.787871 | orchestrator | 2025-09-19 11:57:17.787882 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 11:57:17.787892 | orchestrator | Friday 19 September 2025 11:57:08 +0000 (0:00:00.072) 0:00:06.442 ****** 2025-09-19 11:57:17.787903 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:17.787914 | orchestrator | 2025-09-19 11:57:17.787924 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-09-19 11:57:17.787936 | orchestrator | Friday 19 September 2025 11:57:08 +0000 (0:00:00.246) 0:00:06.688 ****** 2025-09-19 11:57:17.787947 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:17.787958 | orchestrator | 2025-09-19 11:57:17.787988 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-09-19 11:57:17.788000 | orchestrator | Friday 19 September 2025 11:57:09 +0000 (0:00:00.252) 0:00:06.940 ****** 2025-09-19 11:57:17.788011 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:17.788022 | orchestrator | 2025-09-19 11:57:17.788033 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-09-19 11:57:17.788044 | orchestrator | Friday 19 September 2025 11:57:09 +0000 (0:00:00.111) 0:00:07.052 ****** 2025-09-19 11:57:17.788054 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:57:17.788065 | orchestrator | 2025-09-19 11:57:17.788076 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-09-19 11:57:17.788086 | orchestrator | Friday 
19 September 2025 11:57:10 +0000 (0:00:01.643) 0:00:08.696 ****** 2025-09-19 11:57:17.788097 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:17.788108 | orchestrator | 2025-09-19 11:57:17.788119 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-09-19 11:57:17.788130 | orchestrator | Friday 19 September 2025 11:57:11 +0000 (0:00:00.338) 0:00:09.034 ****** 2025-09-19 11:57:17.788140 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:17.788159 | orchestrator | 2025-09-19 11:57:17.788170 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-09-19 11:57:17.788180 | orchestrator | Friday 19 September 2025 11:57:11 +0000 (0:00:00.318) 0:00:09.353 ****** 2025-09-19 11:57:17.788191 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:17.788202 | orchestrator | 2025-09-19 11:57:17.788213 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-09-19 11:57:17.788224 | orchestrator | Friday 19 September 2025 11:57:11 +0000 (0:00:00.320) 0:00:09.673 ****** 2025-09-19 11:57:17.788234 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:17.788245 | orchestrator | 2025-09-19 11:57:17.788256 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-09-19 11:57:17.788282 | orchestrator | Friday 19 September 2025 11:57:12 +0000 (0:00:00.293) 0:00:09.966 ****** 2025-09-19 11:57:17.788293 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:17.788304 | orchestrator | 2025-09-19 11:57:17.788315 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-09-19 11:57:17.788325 | orchestrator | Friday 19 September 2025 11:57:12 +0000 (0:00:00.119) 0:00:10.086 ****** 2025-09-19 11:57:17.788336 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:17.788347 | orchestrator | 2025-09-19 11:57:17.788358 | orchestrator | TASK [Prepare 
status test vars] ************************************************ 2025-09-19 11:57:17.788368 | orchestrator | Friday 19 September 2025 11:57:12 +0000 (0:00:00.132) 0:00:10.218 ****** 2025-09-19 11:57:17.788379 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:17.788390 | orchestrator | 2025-09-19 11:57:17.788400 | orchestrator | TASK [Gather status data] ****************************************************** 2025-09-19 11:57:17.788411 | orchestrator | Friday 19 September 2025 11:57:12 +0000 (0:00:00.121) 0:00:10.340 ****** 2025-09-19 11:57:17.788422 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:57:17.788433 | orchestrator | 2025-09-19 11:57:17.788443 | orchestrator | TASK [Set health test data] **************************************************** 2025-09-19 11:57:17.788454 | orchestrator | Friday 19 September 2025 11:57:13 +0000 (0:00:01.417) 0:00:11.758 ****** 2025-09-19 11:57:17.788465 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:17.788476 | orchestrator | 2025-09-19 11:57:17.788487 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-09-19 11:57:17.788497 | orchestrator | Friday 19 September 2025 11:57:14 +0000 (0:00:00.304) 0:00:12.063 ****** 2025-09-19 11:57:17.788508 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:17.788519 | orchestrator | 2025-09-19 11:57:17.788530 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-09-19 11:57:17.788540 | orchestrator | Friday 19 September 2025 11:57:14 +0000 (0:00:00.153) 0:00:12.216 ****** 2025-09-19 11:57:17.788551 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:17.788562 | orchestrator | 2025-09-19 11:57:17.788573 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-09-19 11:57:17.788583 | orchestrator | Friday 19 September 2025 11:57:14 +0000 (0:00:00.129) 0:00:12.346 ****** 2025-09-19 11:57:17.788595 | orchestrator | 
skipping: [testbed-node-0] 2025-09-19 11:57:17.788606 | orchestrator | 2025-09-19 11:57:17.788617 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-09-19 11:57:17.788627 | orchestrator | Friday 19 September 2025 11:57:14 +0000 (0:00:00.127) 0:00:12.473 ****** 2025-09-19 11:57:17.788638 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:17.788649 | orchestrator | 2025-09-19 11:57:17.788659 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-19 11:57:17.788670 | orchestrator | Friday 19 September 2025 11:57:14 +0000 (0:00:00.323) 0:00:12.797 ****** 2025-09-19 11:57:17.788681 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 11:57:17.788692 | orchestrator | 2025-09-19 11:57:17.788703 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-19 11:57:17.788714 | orchestrator | Friday 19 September 2025 11:57:15 +0000 (0:00:00.265) 0:00:13.062 ****** 2025-09-19 11:57:17.788724 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:17.788742 | orchestrator | 2025-09-19 11:57:17.788753 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 11:57:17.788764 | orchestrator | Friday 19 September 2025 11:57:15 +0000 (0:00:00.255) 0:00:13.318 ****** 2025-09-19 11:57:17.788775 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 11:57:17.788786 | orchestrator | 2025-09-19 11:57:17.788796 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 11:57:17.788807 | orchestrator | Friday 19 September 2025 11:57:17 +0000 (0:00:01.578) 0:00:14.896 ****** 2025-09-19 11:57:17.788818 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 11:57:17.788829 | orchestrator | 2025-09-19 11:57:17.788839 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2025-09-19 11:57:17.788850 | orchestrator | Friday 19 September 2025 11:57:17 +0000 (0:00:00.269) 0:00:15.165 ****** 2025-09-19 11:57:17.788861 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 11:57:17.788872 | orchestrator | 2025-09-19 11:57:17.788889 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 11:57:20.167355 | orchestrator | Friday 19 September 2025 11:57:17 +0000 (0:00:00.261) 0:00:15.426 ****** 2025-09-19 11:57:20.167462 | orchestrator | 2025-09-19 11:57:20.167478 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 11:57:20.167490 | orchestrator | Friday 19 September 2025 11:57:17 +0000 (0:00:00.070) 0:00:15.497 ****** 2025-09-19 11:57:20.167501 | orchestrator | 2025-09-19 11:57:20.167518 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 11:57:20.167529 | orchestrator | Friday 19 September 2025 11:57:17 +0000 (0:00:00.071) 0:00:15.569 ****** 2025-09-19 11:57:20.167540 | orchestrator | 2025-09-19 11:57:20.167551 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-19 11:57:20.167584 | orchestrator | Friday 19 September 2025 11:57:17 +0000 (0:00:00.072) 0:00:15.641 ****** 2025-09-19 11:57:20.167596 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 11:57:20.167607 | orchestrator | 2025-09-19 11:57:20.167622 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 11:57:20.167633 | orchestrator | Friday 19 September 2025 11:57:19 +0000 (0:00:01.507) 0:00:17.149 ****** 2025-09-19 11:57:20.167644 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-09-19 11:57:20.167655 | orchestrator |  "msg": [ 2025-09-19 
11:57:20.167668 | orchestrator |  "Validator run completed.", 2025-09-19 11:57:20.167680 | orchestrator |  "You can find the report file here:", 2025-09-19 11:57:20.167694 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-09-19T11:57:03+00:00-report.json", 2025-09-19 11:57:20.167715 | orchestrator |  "on the following host:", 2025-09-19 11:57:20.167735 | orchestrator |  "testbed-manager" 2025-09-19 11:57:20.167760 | orchestrator |  ] 2025-09-19 11:57:20.167786 | orchestrator | } 2025-09-19 11:57:20.167805 | orchestrator | 2025-09-19 11:57:20.167823 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:57:20.167842 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-19 11:57:20.167865 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:57:20.167886 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 11:57:20.167905 | orchestrator | 2025-09-19 11:57:20.167925 | orchestrator | 2025-09-19 11:57:20.167944 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 11:57:20.167963 | orchestrator | Friday 19 September 2025 11:57:19 +0000 (0:00:00.574) 0:00:17.723 ****** 2025-09-19 11:57:20.167976 | orchestrator | =============================================================================== 2025-09-19 11:57:20.168012 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.64s 2025-09-19 11:57:20.168024 | orchestrator | Aggregate test results step one ----------------------------------------- 1.58s 2025-09-19 11:57:20.168036 | orchestrator | Write report file ------------------------------------------------------- 1.51s 2025-09-19 11:57:20.168048 | orchestrator | Gather status data 
------------------------------------------------------ 1.42s 2025-09-19 11:57:20.168061 | orchestrator | Get container info ------------------------------------------------------ 1.02s 2025-09-19 11:57:20.168073 | orchestrator | Create report output directory ------------------------------------------ 0.83s 2025-09-19 11:57:20.168086 | orchestrator | Aggregate test results step one ----------------------------------------- 0.65s 2025-09-19 11:57:20.168098 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s 2025-09-19 11:57:20.168108 | orchestrator | Print report file information ------------------------------------------- 0.57s 2025-09-19 11:57:20.168119 | orchestrator | Set test result to passed if container is existing ---------------------- 0.47s 2025-09-19 11:57:20.168130 | orchestrator | Set quorum test data ---------------------------------------------------- 0.34s 2025-09-19 11:57:20.168141 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.32s 2025-09-19 11:57:20.168151 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s 2025-09-19 11:57:20.168162 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.32s 2025-09-19 11:57:20.168173 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.31s 2025-09-19 11:57:20.168184 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-09-19 11:57:20.168194 | orchestrator | Set health test data ---------------------------------------------------- 0.30s 2025-09-19 11:57:20.168205 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.29s 2025-09-19 11:57:20.168216 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s 2025-09-19 11:57:20.168227 | orchestrator | Set test result to failed if 
ceph-mon is not running -------------------- 0.28s 2025-09-19 11:57:20.449119 | orchestrator | + osism validate ceph-mgrs 2025-09-19 11:57:51.507324 | orchestrator | 2025-09-19 11:57:51.507426 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-09-19 11:57:51.507443 | orchestrator | 2025-09-19 11:57:51.507456 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-19 11:57:51.507468 | orchestrator | Friday 19 September 2025 11:57:36 +0000 (0:00:00.429) 0:00:00.429 ****** 2025-09-19 11:57:51.507479 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 11:57:51.507491 | orchestrator | 2025-09-19 11:57:51.507502 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-19 11:57:51.507512 | orchestrator | Friday 19 September 2025 11:57:37 +0000 (0:00:00.705) 0:00:01.135 ****** 2025-09-19 11:57:51.507523 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 11:57:51.507534 | orchestrator | 2025-09-19 11:57:51.507582 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-19 11:57:51.507594 | orchestrator | Friday 19 September 2025 11:57:38 +0000 (0:00:00.826) 0:00:01.962 ****** 2025-09-19 11:57:51.507605 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:51.507618 | orchestrator | 2025-09-19 11:57:51.507629 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-19 11:57:51.507640 | orchestrator | Friday 19 September 2025 11:57:38 +0000 (0:00:00.247) 0:00:02.209 ****** 2025-09-19 11:57:51.507651 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:51.507662 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:57:51.507673 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:57:51.507684 | orchestrator | 2025-09-19 11:57:51.507695 | orchestrator | TASK [Get container 
info] ****************************************************** 2025-09-19 11:57:51.507716 | orchestrator | Friday 19 September 2025 11:57:38 +0000 (0:00:00.286) 0:00:02.496 ****** 2025-09-19 11:57:51.507745 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:57:51.507756 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:57:51.507767 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:51.507778 | orchestrator | 2025-09-19 11:57:51.507788 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-19 11:57:51.507799 | orchestrator | Friday 19 September 2025 11:57:39 +0000 (0:00:00.999) 0:00:03.495 ****** 2025-09-19 11:57:51.507810 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:51.507822 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:57:51.507833 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:57:51.507843 | orchestrator | 2025-09-19 11:57:51.507854 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-19 11:57:51.507865 | orchestrator | Friday 19 September 2025 11:57:40 +0000 (0:00:00.324) 0:00:03.820 ****** 2025-09-19 11:57:51.507876 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:51.507886 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:57:51.507897 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:57:51.507907 | orchestrator | 2025-09-19 11:57:51.507918 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 11:57:51.507929 | orchestrator | Friday 19 September 2025 11:57:40 +0000 (0:00:00.463) 0:00:04.283 ****** 2025-09-19 11:57:51.507940 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:51.507950 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:57:51.507961 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:57:51.507972 | orchestrator | 2025-09-19 11:57:51.507983 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2025-09-19 11:57:51.507993 | orchestrator | Friday 19 September 2025 11:57:40 +0000 (0:00:00.308) 0:00:04.592 ****** 2025-09-19 11:57:51.508004 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:51.508015 | orchestrator | skipping: [testbed-node-1] 2025-09-19 11:57:51.508025 | orchestrator | skipping: [testbed-node-2] 2025-09-19 11:57:51.508036 | orchestrator | 2025-09-19 11:57:51.508047 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-09-19 11:57:51.508057 | orchestrator | Friday 19 September 2025 11:57:41 +0000 (0:00:00.326) 0:00:04.919 ****** 2025-09-19 11:57:51.508068 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:51.508079 | orchestrator | ok: [testbed-node-1] 2025-09-19 11:57:51.508090 | orchestrator | ok: [testbed-node-2] 2025-09-19 11:57:51.508100 | orchestrator | 2025-09-19 11:57:51.508111 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 11:57:51.508121 | orchestrator | Friday 19 September 2025 11:57:41 +0000 (0:00:00.291) 0:00:05.210 ****** 2025-09-19 11:57:51.508132 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:51.508143 | orchestrator | 2025-09-19 11:57:51.508153 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 11:57:51.508164 | orchestrator | Friday 19 September 2025 11:57:42 +0000 (0:00:00.666) 0:00:05.877 ****** 2025-09-19 11:57:51.508175 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:51.508186 | orchestrator | 2025-09-19 11:57:51.508196 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 11:57:51.508207 | orchestrator | Friday 19 September 2025 11:57:42 +0000 (0:00:00.238) 0:00:06.115 ****** 2025-09-19 11:57:51.508218 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:51.508228 | orchestrator | 2025-09-19 11:57:51.508239 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2025-09-19 11:57:51.508249 | orchestrator | Friday 19 September 2025 11:57:42 +0000 (0:00:00.249) 0:00:06.365 ****** 2025-09-19 11:57:51.508260 | orchestrator | 2025-09-19 11:57:51.508271 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 11:57:51.508282 | orchestrator | Friday 19 September 2025 11:57:42 +0000 (0:00:00.070) 0:00:06.435 ****** 2025-09-19 11:57:51.508292 | orchestrator | 2025-09-19 11:57:51.508303 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 11:57:51.508314 | orchestrator | Friday 19 September 2025 11:57:42 +0000 (0:00:00.069) 0:00:06.504 ****** 2025-09-19 11:57:51.508331 | orchestrator | 2025-09-19 11:57:51.508342 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 11:57:51.508353 | orchestrator | Friday 19 September 2025 11:57:42 +0000 (0:00:00.074) 0:00:06.579 ****** 2025-09-19 11:57:51.508363 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:51.508374 | orchestrator | 2025-09-19 11:57:51.508385 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-09-19 11:57:51.508396 | orchestrator | Friday 19 September 2025 11:57:43 +0000 (0:00:00.278) 0:00:06.858 ****** 2025-09-19 11:57:51.508407 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:51.508418 | orchestrator | 2025-09-19 11:57:51.508447 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-09-19 11:57:51.508458 | orchestrator | Friday 19 September 2025 11:57:43 +0000 (0:00:00.250) 0:00:07.109 ****** 2025-09-19 11:57:51.508469 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:51.508480 | orchestrator | 2025-09-19 11:57:51.508491 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2025-09-19 11:57:51.508501 | orchestrator | Friday 19 September 2025 11:57:43 +0000 (0:00:00.120) 0:00:07.229 ****** 2025-09-19 11:57:51.508512 | orchestrator | changed: [testbed-node-0] 2025-09-19 11:57:51.508523 | orchestrator | 2025-09-19 11:57:51.508534 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-09-19 11:57:51.508561 | orchestrator | Friday 19 September 2025 11:57:45 +0000 (0:00:02.033) 0:00:09.263 ****** 2025-09-19 11:57:51.508572 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:51.508582 | orchestrator | 2025-09-19 11:57:51.508593 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-09-19 11:57:51.508604 | orchestrator | Friday 19 September 2025 11:57:45 +0000 (0:00:00.265) 0:00:09.529 ****** 2025-09-19 11:57:51.508615 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:51.508626 | orchestrator | 2025-09-19 11:57:51.508636 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-09-19 11:57:51.508647 | orchestrator | Friday 19 September 2025 11:57:46 +0000 (0:00:00.715) 0:00:10.244 ****** 2025-09-19 11:57:51.508658 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:51.508669 | orchestrator | 2025-09-19 11:57:51.508679 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-09-19 11:57:51.508691 | orchestrator | Friday 19 September 2025 11:57:46 +0000 (0:00:00.140) 0:00:10.385 ****** 2025-09-19 11:57:51.508701 | orchestrator | ok: [testbed-node-0] 2025-09-19 11:57:51.508712 | orchestrator | 2025-09-19 11:57:51.508723 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-19 11:57:51.508734 | orchestrator | Friday 19 September 2025 11:57:46 +0000 (0:00:00.148) 0:00:10.533 ****** 2025-09-19 11:57:51.508744 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 
11:57:51.508755 | orchestrator | 2025-09-19 11:57:51.508766 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-19 11:57:51.508777 | orchestrator | Friday 19 September 2025 11:57:47 +0000 (0:00:00.274) 0:00:10.808 ****** 2025-09-19 11:57:51.508788 | orchestrator | skipping: [testbed-node-0] 2025-09-19 11:57:51.508799 | orchestrator | 2025-09-19 11:57:51.508809 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 11:57:51.508820 | orchestrator | Friday 19 September 2025 11:57:47 +0000 (0:00:00.287) 0:00:11.096 ****** 2025-09-19 11:57:51.508831 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 11:57:51.508842 | orchestrator | 2025-09-19 11:57:51.508852 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 11:57:51.508863 | orchestrator | Friday 19 September 2025 11:57:48 +0000 (0:00:01.286) 0:00:12.382 ****** 2025-09-19 11:57:51.508874 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 11:57:51.508884 | orchestrator | 2025-09-19 11:57:51.508895 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 11:57:51.508906 | orchestrator | Friday 19 September 2025 11:57:48 +0000 (0:00:00.250) 0:00:12.633 ****** 2025-09-19 11:57:51.508923 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 11:57:51.508934 | orchestrator | 2025-09-19 11:57:51.508945 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 11:57:51.508956 | orchestrator | Friday 19 September 2025 11:57:49 +0000 (0:00:00.260) 0:00:12.894 ****** 2025-09-19 11:57:51.508966 | orchestrator | 2025-09-19 11:57:51.508977 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 11:57:51.508988 | orchestrator 
| Friday 19 September 2025 11:57:49 +0000 (0:00:00.082) 0:00:12.976 ******
2025-09-19 11:57:51.508998 | orchestrator |
2025-09-19 11:57:51.509009 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 11:57:51.509020 | orchestrator | Friday 19 September 2025 11:57:49 +0000 (0:00:00.070) 0:00:13.047 ******
2025-09-19 11:57:51.509031 | orchestrator |
2025-09-19 11:57:51.509042 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-09-19 11:57:51.509053 | orchestrator | Friday 19 September 2025 11:57:49 +0000 (0:00:00.070) 0:00:13.118 ******
2025-09-19 11:57:51.509063 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 11:57:51.509074 | orchestrator |
2025-09-19 11:57:51.509085 | orchestrator | TASK [Print report file information] *******************************************
2025-09-19 11:57:51.509096 | orchestrator | Friday 19 September 2025 11:57:51 +0000 (0:00:01.672) 0:00:14.790 ******
2025-09-19 11:57:51.509106 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-09-19 11:57:51.509117 | orchestrator |  "msg": [
2025-09-19 11:57:51.509128 | orchestrator |  "Validator run completed.",
2025-09-19 11:57:51.509139 | orchestrator |  "You can find the report file here:",
2025-09-19 11:57:51.509150 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-09-19T11:57:37+00:00-report.json",
2025-09-19 11:57:51.509162 | orchestrator |  "on the following host:",
2025-09-19 11:57:51.509173 | orchestrator |  "testbed-manager"
2025-09-19 11:57:51.509184 | orchestrator |  ]
2025-09-19 11:57:51.509195 | orchestrator | }
2025-09-19 11:57:51.509207 | orchestrator |
2025-09-19 11:57:51.509217 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 11:57:51.509229 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-19 11:57:51.509242 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:57:51.509260 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 11:57:51.802653 | orchestrator |
2025-09-19 11:57:51.802749 | orchestrator |
2025-09-19 11:57:51.802762 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:57:51.802794 | orchestrator | Friday 19 September 2025 11:57:51 +0000 (0:00:00.401) 0:00:15.192 ******
2025-09-19 11:57:51.802805 | orchestrator | ===============================================================================
2025-09-19 11:57:51.802815 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.03s
2025-09-19 11:57:51.802825 | orchestrator | Write report file ------------------------------------------------------- 1.67s
2025-09-19 11:57:51.802836 | orchestrator | Aggregate test results step one ----------------------------------------- 1.29s
2025-09-19 11:57:51.802846 | orchestrator | Get container info ------------------------------------------------------ 1.00s
2025-09-19 11:57:51.802856 | orchestrator | Create report output directory ------------------------------------------ 0.83s
2025-09-19 11:57:51.802865 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.72s
2025-09-19 11:57:51.802875 | orchestrator | Get timestamp for report file ------------------------------------------- 0.71s
2025-09-19 11:57:51.802884 | orchestrator | Aggregate test results step one ----------------------------------------- 0.67s
2025-09-19 11:57:51.802914 | orchestrator | Set test result to passed if container is existing ---------------------- 0.46s
2025-09-19 11:57:51.802925 | orchestrator | Print report file information ------------------------------------------- 0.40s
2025-09-19 11:57:51.802939 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.33s
2025-09-19 11:57:51.802949 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s
2025-09-19 11:57:51.802958 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2025-09-19 11:57:51.802968 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.29s
2025-09-19 11:57:51.802977 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.29s
2025-09-19 11:57:51.802987 | orchestrator | Prepare test data for container existence test --------------------------- 0.29s
2025-09-19 11:57:51.802997 | orchestrator | Print report file information ------------------------------------------- 0.28s
2025-09-19 11:57:51.803006 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.27s
2025-09-19 11:57:51.803015 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.27s
2025-09-19 11:57:51.803025 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s
2025-09-19 11:57:52.064175 | orchestrator | + osism validate ceph-osds
2025-09-19 11:58:12.360134 | orchestrator |
2025-09-19 11:58:12.360240 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-09-19 11:58:12.360256 | orchestrator |
2025-09-19 11:58:12.360268 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-09-19 11:58:12.360280 | orchestrator | Friday 19 September 2025 11:58:08 +0000 (0:00:00.421) 0:00:00.421 ******
2025-09-19 11:58:12.360292 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 11:58:12.360303 | orchestrator |
2025-09-19 11:58:12.360314 | orchestrator | TASK [Get extra vars for Ceph configuration]
***********************************
2025-09-19 11:58:12.360325 | orchestrator | Friday 19 September 2025 11:58:08 +0000 (0:00:00.640) 0:00:01.062 ******
2025-09-19 11:58:12.360336 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 11:58:12.360347 | orchestrator |
2025-09-19 11:58:12.360358 | orchestrator | TASK [Create report output directory] ******************************************
2025-09-19 11:58:12.360368 | orchestrator | Friday 19 September 2025 11:58:09 +0000 (0:00:00.239) 0:00:01.301 ******
2025-09-19 11:58:12.360380 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 11:58:12.360391 | orchestrator |
2025-09-19 11:58:12.360402 | orchestrator | TASK [Define report vars] ******************************************************
2025-09-19 11:58:12.360413 | orchestrator | Friday 19 September 2025 11:58:10 +0000 (0:00:00.984) 0:00:02.286 ******
2025-09-19 11:58:12.360424 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:12.360436 | orchestrator |
2025-09-19 11:58:12.360447 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-09-19 11:58:12.360458 | orchestrator | Friday 19 September 2025 11:58:10 +0000 (0:00:00.140) 0:00:02.426 ******
2025-09-19 11:58:12.360469 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:58:12.360480 | orchestrator |
2025-09-19 11:58:12.360491 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-09-19 11:58:12.360502 | orchestrator | Friday 19 September 2025 11:58:10 +0000 (0:00:00.124) 0:00:02.551 ******
2025-09-19 11:58:12.360512 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:58:12.360523 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:58:12.360534 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:58:12.360545 | orchestrator |
2025-09-19 11:58:12.360556 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-09-19 11:58:12.360566 | orchestrator | Friday 19 September 2025 11:58:10 +0000 (0:00:00.286) 0:00:02.837 ******
2025-09-19 11:58:12.360577 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:12.360588 | orchestrator |
2025-09-19 11:58:12.360599 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-09-19 11:58:12.360610 | orchestrator | Friday 19 September 2025 11:58:10 +0000 (0:00:00.148) 0:00:02.986 ******
2025-09-19 11:58:12.360641 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:12.360652 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:58:12.360663 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:58:12.360674 | orchestrator |
2025-09-19 11:58:12.360686 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-09-19 11:58:12.360698 | orchestrator | Friday 19 September 2025 11:58:11 +0000 (0:00:00.300) 0:00:03.287 ******
2025-09-19 11:58:12.360710 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:12.360722 | orchestrator |
2025-09-19 11:58:12.360734 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-19 11:58:12.360775 | orchestrator | Friday 19 September 2025 11:58:11 +0000 (0:00:00.510) 0:00:03.798 ******
2025-09-19 11:58:12.360787 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:12.360800 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:58:12.360812 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:58:12.360824 | orchestrator |
2025-09-19 11:58:12.360836 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-09-19 11:58:12.360848 | orchestrator | Friday 19 September 2025 11:58:12 +0000 (0:00:00.456) 0:00:04.255 ******
2025-09-19 11:58:12.360862 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5839be0398ccbb931e8971d9ea3e966fd622fa9ece035fffac76f7e32e35c721', 'image':
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-09-19 11:58:12.360877 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a30030a7de4d3df7dde04b15cfd281b5f6efbfdf90b94b918c8ea97ac8161786', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-09-19 11:58:12.360904 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c9fa0996477711abc536181a802931e065d335308ddc98ad77ac949375ef846a', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-09-19 11:58:12.360919 | orchestrator | skipping: [testbed-node-3] => (item={'id': '814058da3b5f8803f779c6e835f64517a6267a742c3439c384ec09daf7bb83fd', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-19 11:58:12.360933 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd0262e5956bb014391c6a31431cd0ba840740e4e96e64d35802ed8046dae83c0', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-09-19 11:58:12.360962 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5743c1eeef1050da570b5121093da5909e64b10b3f300886fda94f3e2caa41e8', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-09-19 11:58:12.360975 | orchestrator | skipping: [testbed-node-3] => (item={'id': '09102f636c492f8df85f302160886e8683140cd597f42f944d3bedde803844d7', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 11:58:12.360999 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a2f961da77d9b20f00dfda98c314103aa0ab50444a90fbfc8ef5c8e0c57b8a91', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 11:58:12.361011 | orchestrator | skipping: [testbed-node-3] => (item={'id': '87dd41b77c7c6646ed1d6daa205e233b7f98e1841500acd985b8b97db559adf5', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-09-19 11:58:12.361032 | orchestrator | skipping: [testbed-node-3] => (item={'id': '66803353085cd13abd6e6264e1910db4153d5b65a44c32a4f9efa6a51b9cfbc4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-09-19 11:58:12.361044 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd76d4aae6c44e307b323e031c47fa144185c7e58ae19501204a73fe3fc3c8f50', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})
2025-09-19 11:58:12.361056 | orchestrator | skipping: [testbed-node-3] => (item={'id': '27b38a34a7712004a11c708de8aef045895ea885d15554610ac8d5c5e17eb389', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-19 11:58:12.361068 | orchestrator | ok: [testbed-node-3] => (item={'id': 'fce3c929227a3ef9afc488d990f7922cc2b24873542b6eb92d84ab9622bcc2b9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 11:58:12.361080 | orchestrator | ok: [testbed-node-3] => (item={'id': '47ef14ae4ff7c34551e3c832537434a36cc05779dc96ecba76918ccd1f0cde5b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 11:58:12.361092 | orchestrator | skipping: [testbed-node-3] => (item={'id': '413abd61d2f371c0d9c5ec907b18264083897255958297aa8a6c9c5841b7897c', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})
2025-09-19 11:58:12.361103 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0c9a0cbfcb1120e9f94773f79d9be1b205d532fe7954aff1cd3a5c0c21ad91e1', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-09-19 11:58:12.361115 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9c86ecb8195853a1b3ed326c526eacd7ef0c84ab734535260136bf626fdb7b42', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-09-19 11:58:12.361126 | orchestrator | skipping: [testbed-node-3] => (item={'id': '69b9fe0134c749454337434f787bba2fdd02fc7a99db3fd3c11a583a52c46c6f', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})
2025-09-19 11:58:12.361138 | orchestrator | skipping: [testbed-node-3] => (item={'id': '24e3b0a199cc5072ab80f5201c519fbf26a504060b5d1bfe0d3ba7b84e5cefdf', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})
2025-09-19 11:58:12.361149 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1cdece5b25797a0a6829da8500992000fe7fa7abe284066b120e2e19b139046b', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd',
'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 11:58:12.360166 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9bd15e8f689b887b65192354454d15b8efb747d1bb8280e10b9c297f8778f228', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-09-19 11:58:12.601396 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fae3f9cffc70179e23d696186bcdbf477b18168eacea4fdbc8f413391ee25312', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-09-19 11:58:12.601492 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1d116e59bb2ffc1058d83443f18d4589b0bf071ed0d87eaecb79611bc1a6e7b3', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-09-19 11:58:12.601528 | orchestrator | skipping: [testbed-node-4] => (item={'id': '03ff2cb435fe65739aebaca3c7901c6c5725c7a38217a5096b49c306a75d9498', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-19 11:58:12.601542 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'da1a339463c0b2db4e7b6a172c5ac1aae304e89806d27eaad01f77d11483b491', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-09-19 11:58:12.601554 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c637f7c788fbe373d0f29e7474f7c38bd42a38975e1c4c6a52d066b24d3ba4f0', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-09-19 11:58:12.601565 | orchestrator | skipping: [testbed-node-4] => (item={'id': '31ef20e86d9d2a6c5be257e8dde9bd49b824f789706a1a106c458f37119567dd', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 11:58:12.601578 | orchestrator | skipping: [testbed-node-4] => (item={'id': '20e9d7244d93e6f3120b4629630b550e83ba4783dcd42f9c368fc01c03fadc09', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 11:58:12.601589 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ac04134fec31e8f1f144aa41726c76959c4a478fa93e1fe1519deb45623562f2', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-09-19 11:58:12.601600 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4f837c66f05e12ef2047f218d463b6ae552cf4a5d7cc72654753f2b731fa022d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-09-19 11:58:12.601629 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bc64f6cb2080b1fadac6b21ef3acb34a62a6e0c3c667af6fd2c9a83bf5ab092f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})
2025-09-19 11:58:12.601642 | orchestrator | skipping: [testbed-node-4] => (item={'id': '23ce0ac4b6707c667fce1347d214a12eef8000e58d4ac2d64b3b4c9378141b52', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-19 11:58:12.601660 | orchestrator | ok: [testbed-node-4] => (item={'id': 'cfb4b2ed926150a52b4b88ca82584f1626c32c301d0805c8d7fbe06d50626381', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 11:58:12.601673 | orchestrator | ok: [testbed-node-4] => (item={'id': 'b002068e48091115be68a2bbe6e7f75e8646f1878cd9245d72b0829f9b707151', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 11:58:12.601685 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9342ee98d6d74f37cc8e10016da85c862e125e0482192fc8744770f68ff64503', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})
2025-09-19 11:58:12.601715 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c36cedd9ce54dc63f38139725c3d694a3d9d30c336478660bd1862528149212b', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-09-19 11:58:12.601737 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eb414a4fee1ff91e226292a3f2c2da26ca20e2f0ed87529904336c6c20553d1e', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-09-19 11:58:12.601807 | orchestrator | skipping: [testbed-node-4] => (item={'id': '36aa586b1072c72a0f90bbe8d1cc13da3e464315a47cac538b5d938fb689d019', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})
2025-09-19 11:58:12.601819 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7bc2e01fc46da2665e70325a0603b28bfbe55d4d4eb749e12550f76e6681949e', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})
2025-09-19 11:58:12.601830 | orchestrator
| skipping: [testbed-node-4] => (item={'id': 'f728d10f3efa9b4b2b5b406e21658c0e2dbb9a9bd722981bcbc8d469ae710d5e', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 11:58:12.601841 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8584e3836f22f3192115eeefa6b19d8513fe72bc2ab69543e051fc665daa0573', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-09-19 11:58:12.601852 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9591f0af930b6af69e7d8829f3d68319ba348a8b250067f7fb33a5497b2849f9', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-09-19 11:58:12.601863 | orchestrator | skipping: [testbed-node-5] => (item={'id': '47d152624ecd43dc6f336d706e77f1fad0147e81f0d17f56c29c0305c57afbb0', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-09-19 11:58:12.601873 | orchestrator | skipping: [testbed-node-5] => (item={'id': '01a4f43e0d9baae636530deab9dc875ff87f5ff505db8628e54704a042944c3f', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-19 11:58:12.601884 | orchestrator | skipping: [testbed-node-5] => (item={'id': '538883f1d273c5731f3490cea73c29e2d576cefe0168b25dc0aa5ad446781418', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-09-19 11:58:12.601895 | orchestrator | skipping: [testbed-node-5] => (item={'id': '577fb7c39608c7e3605e2db668765af288b5153dca300fda9f1760445f5670ea', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-09-19 11:58:12.601907 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4f0116702d36f2470a650280ad2785628ab557d42f179eeabcb2aeb33e2ba7de', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 11:58:12.601924 | orchestrator | skipping: [testbed-node-5] => (item={'id': '437afbeb0d0020f457030ede3a99f3ff181ea8576c3d1bf6e2ab5bcb42057a83', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2025-09-19 11:58:12.601935 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3464d6f37aa55ad7104d9b0629b8a55dcc440d5c09606d5e0096fcdcebbe8872', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-09-19 11:58:12.601946 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9714eeb2f231ed7e3ebebb4b33a59e0e89c43a50a29cc867bedf39a9d75c7273', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-09-19 11:58:12.601972 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b6f19a08e379957aad9614d13bb99e648f5f1b3250ec95ba697b30592f461d3b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})
2025-09-19 11:58:19.895137 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fdcdb267d9dbf59a87221fa7e7373fc7eb96a5cdb712da9160e0ffff2cca2667', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-19 11:58:19.895240 | orchestrator | ok: [testbed-node-5] => (item={'id': '35a3ac46e747ab497c28fc08f311c2bb9d15f4f6415c661f3149f18a9cc2a824', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 11:58:19.895256 | orchestrator | ok: [testbed-node-5] => (item={'id': '55cb592ab07f65645d40fe0745fe45e82bcefd0a77220d0c32373dcce2115686', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 11:58:19.895269 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'baeefa180763a2d8b1299dc098961ea08f3d0e5987c4f421a31576f70485ec3a', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})
2025-09-19 11:58:19.895282 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5f753061dbc5f3bd705b38b5aeb9dd3c19eb05aa68c9acf05830d14450ea4b2c', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-09-19 11:58:19.895295 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8d5ac0b233b9389b110e3ef13a71f6edc7d9e34dbc8c522f758be8ad73526d42', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-09-19 11:58:19.895306 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e6a6bd9e843632dd61c2e40bb3cc8cb1dedc34f1719d38e26d2f2cff926c127a', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})
2025-09-19 11:58:19.895318 | orchestrator | skipping: [testbed-node-5] => (item={'id':
'70c81ea1e3db3f2f92b152c40a9e0848a90e2aff18d6d3fb2245d82290e34061', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})
2025-09-19 11:58:19.895329 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2598cfaf68b7de49b59352958218328bd671b1e856e536564b3f7f2bc4d1a7b6', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 11:58:19.895340 | orchestrator |
2025-09-19 11:58:19.895353 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-09-19 11:58:19.895365 | orchestrator | Friday 19 September 2025 11:58:12 +0000 (0:00:00.490) 0:00:04.745 ******
2025-09-19 11:58:19.895376 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:19.895388 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:58:19.895402 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:58:19.895420 | orchestrator |
2025-09-19 11:58:19.895438 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-09-19 11:58:19.895455 | orchestrator | Friday 19 September 2025 11:58:12 +0000 (0:00:00.288) 0:00:05.034 ******
2025-09-19 11:58:19.895475 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:58:19.895494 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:58:19.895511 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:58:19.895544 | orchestrator |
2025-09-19 11:58:19.895571 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-09-19 11:58:19.895583 | orchestrator | Friday 19 September 2025 11:58:13 +0000 (0:00:00.287) 0:00:05.322 ******
2025-09-19 11:58:19.895593 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:19.895604 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:58:19.895615 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:58:19.895625 | orchestrator |
2025-09-19 11:58:19.895636 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-19 11:58:19.895647 | orchestrator | Friday 19 September 2025 11:58:13 +0000 (0:00:00.480) 0:00:05.803 ******
2025-09-19 11:58:19.895658 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:19.895668 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:58:19.895680 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:58:19.895692 | orchestrator |
2025-09-19 11:58:19.895704 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-09-19 11:58:19.895716 | orchestrator | Friday 19 September 2025 11:58:13 +0000 (0:00:00.287) 0:00:06.090 ******
2025-09-19 11:58:19.895728 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-09-19 11:58:19.895742 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-09-19 11:58:19.895753 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:58:19.895766 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-09-19 11:58:19.895778 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-09-19 11:58:19.895831 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:58:19.895845 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-09-19 11:58:19.895857 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-09-19 11:58:19.895869 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:58:19.895881 | orchestrator |
2025-09-19 11:58:19.895894 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-09-19 11:58:19.895906 | orchestrator | Friday 19 September 2025 11:58:14 +0000 (0:00:00.304) 0:00:06.395 ******
2025-09-19 11:58:19.895918 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:19.895930 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:58:19.895942 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:58:19.895954 | orchestrator |
2025-09-19 11:58:19.895966 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-09-19 11:58:19.895978 | orchestrator | Friday 19 September 2025 11:58:14 +0000 (0:00:00.306) 0:00:06.701 ******
2025-09-19 11:58:19.895989 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:58:19.896002 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:58:19.896014 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:58:19.896027 | orchestrator |
2025-09-19 11:58:19.896038 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-09-19 11:58:19.896049 | orchestrator | Friday 19 September 2025 11:58:15 +0000 (0:00:00.468) 0:00:07.170 ******
2025-09-19 11:58:19.896059 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:58:19.896070 | orchestrator | skipping: [testbed-node-4]
2025-09-19 11:58:19.896081 | orchestrator | skipping: [testbed-node-5]
2025-09-19 11:58:19.896092 | orchestrator |
2025-09-19 11:58:19.896102 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-09-19 11:58:19.896113 | orchestrator | Friday 19 September 2025 11:58:15 +0000 (0:00:00.282) 0:00:07.453 ******
2025-09-19 11:58:19.896124 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:19.896135 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:58:19.896145 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:58:19.896156 | orchestrator |
2025-09-19 11:58:19.896167 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-19 11:58:19.896178 | orchestrator | Friday 19 September 2025
11:58:15 +0000 (0:00:00.288) 0:00:07.741 ******
2025-09-19 11:58:19.896197 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:58:19.896208 | orchestrator |
2025-09-19 11:58:19.896219 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-19 11:58:19.896230 | orchestrator | Friday 19 September 2025 11:58:15 +0000 (0:00:00.221) 0:00:07.963 ******
2025-09-19 11:58:19.896241 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:58:19.896251 | orchestrator |
2025-09-19 11:58:19.896262 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-19 11:58:19.896273 | orchestrator | Friday 19 September 2025 11:58:16 +0000 (0:00:00.231) 0:00:08.194 ******
2025-09-19 11:58:19.896284 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:58:19.896295 | orchestrator |
2025-09-19 11:58:19.896307 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 11:58:19.896317 | orchestrator | Friday 19 September 2025 11:58:16 +0000 (0:00:00.237) 0:00:08.432 ******
2025-09-19 11:58:19.896328 | orchestrator |
2025-09-19 11:58:19.896339 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 11:58:19.896350 | orchestrator | Friday 19 September 2025 11:58:16 +0000 (0:00:00.066) 0:00:08.498 ******
2025-09-19 11:58:19.896360 | orchestrator |
2025-09-19 11:58:19.896371 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 11:58:19.896382 | orchestrator | Friday 19 September 2025 11:58:16 +0000 (0:00:00.062) 0:00:08.560 ******
2025-09-19 11:58:19.896393 | orchestrator |
2025-09-19 11:58:19.896403 | orchestrator | TASK [Print report file information] *******************************************
2025-09-19 11:58:19.896414 | orchestrator | Friday 19 September 2025 11:58:16 +0000 (0:00:00.239) 0:00:08.800 ******
2025-09-19 11:58:19.896425 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:58:19.896435 | orchestrator |
2025-09-19 11:58:19.896446 | orchestrator | TASK [Fail early due to containers not running] ********************************
2025-09-19 11:58:19.896457 | orchestrator | Friday 19 September 2025 11:58:16 +0000 (0:00:00.251) 0:00:09.051 ******
2025-09-19 11:58:19.896468 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:58:19.896478 | orchestrator |
2025-09-19 11:58:19.896489 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-19 11:58:19.896500 | orchestrator | Friday 19 September 2025 11:58:17 +0000 (0:00:00.262) 0:00:09.313 ******
2025-09-19 11:58:19.896511 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:19.896522 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:58:19.896533 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:58:19.896543 | orchestrator |
2025-09-19 11:58:19.896554 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-09-19 11:58:19.896565 | orchestrator | Friday 19 September 2025 11:58:17 +0000 (0:00:00.306) 0:00:09.620 ******
2025-09-19 11:58:19.896576 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:19.896587 | orchestrator |
2025-09-19 11:58:19.896597 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-09-19 11:58:19.896608 | orchestrator | Friday 19 September 2025 11:58:17 +0000 (0:00:00.254) 0:00:09.875 ******
2025-09-19 11:58:19.896619 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 11:58:19.896630 | orchestrator |
2025-09-19 11:58:19.896641 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-09-19 11:58:19.896651 | orchestrator | Friday 19 September 2025 11:58:19 +0000 (0:00:01.636) 0:00:11.511 ******
2025-09-19 11:58:19.896662 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:19.896673 | orchestrator |
2025-09-19 11:58:19.896684 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-09-19 11:58:19.896695 | orchestrator | Friday 19 September 2025 11:58:19 +0000 (0:00:00.124) 0:00:11.636 ******
2025-09-19 11:58:19.896705 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:19.896716 | orchestrator |
2025-09-19 11:58:19.896727 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-09-19 11:58:19.896738 | orchestrator | Friday 19 September 2025 11:58:19 +0000 (0:00:00.312) 0:00:11.949 ******
2025-09-19 11:58:19.896762 | orchestrator | skipping: [testbed-node-3]
2025-09-19 11:58:32.696254 | orchestrator |
2025-09-19 11:58:32.696364 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-09-19 11:58:32.696379 | orchestrator | Friday 19 September 2025 11:58:19 +0000 (0:00:00.099) 0:00:12.048 ******
2025-09-19 11:58:32.696390 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:32.696401 | orchestrator |
2025-09-19 11:58:32.696411 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-19 11:58:32.696421 | orchestrator | Friday 19 September 2025 11:58:20 +0000 (0:00:00.157) 0:00:12.205 ******
2025-09-19 11:58:32.696431 | orchestrator | ok: [testbed-node-3]
2025-09-19 11:58:32.696441 | orchestrator | ok: [testbed-node-4]
2025-09-19 11:58:32.696451 | orchestrator | ok: [testbed-node-5]
2025-09-19 11:58:32.696460 | orchestrator |
2025-09-19 11:58:32.696470 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-09-19 11:58:32.696480 | orchestrator | Friday 19 September 2025 11:58:20 +0000 (0:00:00.482) 0:00:12.688 ******
2025-09-19 11:58:32.696490 | orchestrator | changed: [testbed-node-3]
2025-09-19 11:58:32.696549 | orchestrator | changed: [testbed-node-4]
2025-09-19 11:58:32.696560 | orchestrator |
changed: [testbed-node-5] 2025-09-19 11:58:32.696570 | orchestrator | 2025-09-19 11:58:32.696580 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-09-19 11:58:32.696590 | orchestrator | Friday 19 September 2025 11:58:22 +0000 (0:00:02.462) 0:00:15.151 ****** 2025-09-19 11:58:32.696600 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:58:32.696609 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:58:32.696619 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:58:32.696628 | orchestrator | 2025-09-19 11:58:32.696638 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-09-19 11:58:32.696648 | orchestrator | Friday 19 September 2025 11:58:23 +0000 (0:00:00.285) 0:00:15.436 ****** 2025-09-19 11:58:32.696657 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:58:32.696667 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:58:32.696676 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:58:32.696686 | orchestrator | 2025-09-19 11:58:32.696695 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-09-19 11:58:32.696705 | orchestrator | Friday 19 September 2025 11:58:23 +0000 (0:00:00.488) 0:00:15.925 ****** 2025-09-19 11:58:32.696714 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:58:32.696724 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:58:32.696734 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:58:32.696744 | orchestrator | 2025-09-19 11:58:32.696753 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-09-19 11:58:32.696763 | orchestrator | Friday 19 September 2025 11:58:24 +0000 (0:00:00.539) 0:00:16.465 ****** 2025-09-19 11:58:32.696773 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:58:32.696782 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:58:32.696792 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:58:32.696803 | 
orchestrator | 2025-09-19 11:58:32.696814 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-09-19 11:58:32.696825 | orchestrator | Friday 19 September 2025 11:58:24 +0000 (0:00:00.298) 0:00:16.763 ****** 2025-09-19 11:58:32.696835 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:58:32.696846 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:58:32.696857 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:58:32.696868 | orchestrator | 2025-09-19 11:58:32.696879 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-09-19 11:58:32.696890 | orchestrator | Friday 19 September 2025 11:58:24 +0000 (0:00:00.290) 0:00:17.054 ****** 2025-09-19 11:58:32.696901 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:58:32.696912 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:58:32.696922 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:58:32.696959 | orchestrator | 2025-09-19 11:58:32.696970 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 11:58:32.696981 | orchestrator | Friday 19 September 2025 11:58:25 +0000 (0:00:00.290) 0:00:17.345 ****** 2025-09-19 11:58:32.697011 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:58:32.697022 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:58:32.697033 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:58:32.697043 | orchestrator | 2025-09-19 11:58:32.697054 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-09-19 11:58:32.697065 | orchestrator | Friday 19 September 2025 11:58:25 +0000 (0:00:00.718) 0:00:18.063 ****** 2025-09-19 11:58:32.697076 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:58:32.697087 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:58:32.697098 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:58:32.697108 | orchestrator | 2025-09-19 
11:58:32.697119 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-09-19 11:58:32.697135 | orchestrator | Friday 19 September 2025 11:58:26 +0000 (0:00:00.476) 0:00:18.540 ****** 2025-09-19 11:58:32.697147 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:58:32.697157 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:58:32.697167 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:58:32.697176 | orchestrator | 2025-09-19 11:58:32.697186 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-09-19 11:58:32.697196 | orchestrator | Friday 19 September 2025 11:58:26 +0000 (0:00:00.292) 0:00:18.832 ****** 2025-09-19 11:58:32.697206 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:58:32.697216 | orchestrator | skipping: [testbed-node-4] 2025-09-19 11:58:32.697225 | orchestrator | skipping: [testbed-node-5] 2025-09-19 11:58:32.697235 | orchestrator | 2025-09-19 11:58:32.697244 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-09-19 11:58:32.697254 | orchestrator | Friday 19 September 2025 11:58:26 +0000 (0:00:00.313) 0:00:19.145 ****** 2025-09-19 11:58:32.697263 | orchestrator | ok: [testbed-node-3] 2025-09-19 11:58:32.697273 | orchestrator | ok: [testbed-node-4] 2025-09-19 11:58:32.697283 | orchestrator | ok: [testbed-node-5] 2025-09-19 11:58:32.697292 | orchestrator | 2025-09-19 11:58:32.697302 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-19 11:58:32.697312 | orchestrator | Friday 19 September 2025 11:58:27 +0000 (0:00:00.501) 0:00:19.646 ****** 2025-09-19 11:58:32.697321 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 11:58:32.697331 | orchestrator | 2025-09-19 11:58:32.697341 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-19 11:58:32.697350 | 
orchestrator | Friday 19 September 2025 11:58:27 +0000 (0:00:00.249) 0:00:19.896 ****** 2025-09-19 11:58:32.697360 | orchestrator | skipping: [testbed-node-3] 2025-09-19 11:58:32.697370 | orchestrator | 2025-09-19 11:58:32.697395 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 11:58:32.697405 | orchestrator | Friday 19 September 2025 11:58:28 +0000 (0:00:00.274) 0:00:20.171 ****** 2025-09-19 11:58:32.697415 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 11:58:32.697425 | orchestrator | 2025-09-19 11:58:32.697434 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 11:58:32.697444 | orchestrator | Friday 19 September 2025 11:58:29 +0000 (0:00:01.595) 0:00:21.767 ****** 2025-09-19 11:58:32.697453 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 11:58:32.697462 | orchestrator | 2025-09-19 11:58:32.697472 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 11:58:32.697481 | orchestrator | Friday 19 September 2025 11:58:29 +0000 (0:00:00.258) 0:00:22.025 ****** 2025-09-19 11:58:32.697491 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 11:58:32.697500 | orchestrator | 2025-09-19 11:58:32.697510 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 11:58:32.697519 | orchestrator | Friday 19 September 2025 11:58:30 +0000 (0:00:00.265) 0:00:22.291 ****** 2025-09-19 11:58:32.697529 | orchestrator | 2025-09-19 11:58:32.697538 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 11:58:32.697555 | orchestrator | Friday 19 September 2025 11:58:30 +0000 (0:00:00.065) 0:00:22.356 ****** 2025-09-19 11:58:32.697565 | orchestrator | 2025-09-19 11:58:32.697574 | orchestrator | TASK [Flush handlers] 
********************************************************** 2025-09-19 11:58:32.697583 | orchestrator | Friday 19 September 2025 11:58:30 +0000 (0:00:00.066) 0:00:22.422 ****** 2025-09-19 11:58:32.697593 | orchestrator | 2025-09-19 11:58:32.697602 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-19 11:58:32.697612 | orchestrator | Friday 19 September 2025 11:58:30 +0000 (0:00:00.074) 0:00:22.497 ****** 2025-09-19 11:58:32.697622 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 11:58:32.697631 | orchestrator | 2025-09-19 11:58:32.697640 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 11:58:32.697650 | orchestrator | Friday 19 September 2025 11:58:31 +0000 (0:00:01.538) 0:00:24.036 ****** 2025-09-19 11:58:32.697659 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-09-19 11:58:32.697669 | orchestrator |  "msg": [ 2025-09-19 11:58:32.697679 | orchestrator |  "Validator run completed.", 2025-09-19 11:58:32.697689 | orchestrator |  "You can find the report file here:", 2025-09-19 11:58:32.697698 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-09-19T11:58:08+00:00-report.json", 2025-09-19 11:58:32.697709 | orchestrator |  "on the following host:", 2025-09-19 11:58:32.697719 | orchestrator |  "testbed-manager" 2025-09-19 11:58:32.697729 | orchestrator |  ] 2025-09-19 11:58:32.697739 | orchestrator | } 2025-09-19 11:58:32.697749 | orchestrator | 2025-09-19 11:58:32.697758 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 11:58:32.697769 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-09-19 11:58:32.697781 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 11:58:32.697790 | orchestrator | 
testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-19 11:58:32.697800 | orchestrator |
2025-09-19 11:58:32.697810 | orchestrator |
2025-09-19 11:58:32.697819 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 11:58:32.697829 | orchestrator | Friday 19 September 2025 11:58:32 +0000 (0:00:00.793) 0:00:24.829 ******
2025-09-19 11:58:32.697838 | orchestrator | ===============================================================================
2025-09-19 11:58:32.697848 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.46s
2025-09-19 11:58:32.697857 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.64s
2025-09-19 11:58:32.697871 | orchestrator | Aggregate test results step one ----------------------------------------- 1.60s
2025-09-19 11:58:32.697881 | orchestrator | Write report file ------------------------------------------------------- 1.54s
2025-09-19 11:58:32.697890 | orchestrator | Create report output directory ------------------------------------------ 0.98s
2025-09-19 11:58:32.697900 | orchestrator | Print report file information ------------------------------------------- 0.79s
2025-09-19 11:58:32.697909 | orchestrator | Prepare test data ------------------------------------------------------- 0.72s
2025-09-19 11:58:32.697918 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s
2025-09-19 11:58:32.697943 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.54s
2025-09-19 11:58:32.697953 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.51s
2025-09-19 11:58:32.697963 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.50s
2025-09-19 11:58:32.697973 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.49s
2025-09-19 11:58:32.697982 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s
2025-09-19 11:58:32.697998 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s
2025-09-19 11:58:32.698008 | orchestrator | Set test result to passed if count matches ------------------------------ 0.48s
2025-09-19 11:58:32.698127 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.48s
2025-09-19 11:58:32.698150 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.47s
2025-09-19 11:58:32.984247 | orchestrator | Prepare test data ------------------------------------------------------- 0.46s
2025-09-19 11:58:32.984349 | orchestrator | Flush handlers ---------------------------------------------------------- 0.37s
2025-09-19 11:58:32.984363 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.31s
2025-09-19 11:58:33.282342 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-09-19 11:58:33.288626 | orchestrator | + set -e
2025-09-19 11:58:33.288691 | orchestrator | + source /opt/manager-vars.sh
2025-09-19 11:58:33.288705 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-19 11:58:33.288716 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-19 11:58:33.288727 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-19 11:58:33.288738 | orchestrator | ++ CEPH_VERSION=reef
2025-09-19 11:58:33.288749 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-19 11:58:33.288760 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-19 11:58:33.288772 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-19 11:58:33.288782 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-19 11:58:33.288793 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 11:58:33.288804 | orchestrator | ++ OPENSTACK_VERSION=2024.2
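Editor's note: the Ceph OSD validation above boils down to parsing `ceph osd tree -f json` and flagging any OSD that is not both up and in. A minimal sketch of that check in Python, assuming the usual `nodes` layout of the JSON output (the field names are an assumption based on typical `ceph osd tree` JSON, not taken from the playbook itself):

```python
import json


def find_bad_osds(osd_tree: dict) -> list[str]:
    """Return names of OSDs that are not 'up' or that are weighted out.

    Assumes the layout of `ceph osd tree -f json`: a flat "nodes" list in
    which OSD entries have type == "osd", a "status" string ("up"/"down")
    and a "reweight" float (0 means the OSD is out).
    """
    bad = []
    for node in osd_tree.get("nodes", []):
        if node.get("type") != "osd":
            continue  # skip buckets such as root and host entries
        if node.get("status") != "up" or node.get("reweight", 0) == 0:
            bad.append(node["name"])
    return bad


# Hypothetical, shortened osd-tree JSON for illustration only.
sample = json.loads("""
{
  "nodes": [
    {"id": -1, "name": "default", "type": "root"},
    {"id": 0, "name": "osd.0", "type": "osd", "status": "up", "reweight": 1.0},
    {"id": 1, "name": "osd.1", "type": "osd", "status": "down", "reweight": 1.0}
  ]
}
""")

print(find_bad_osds(sample))  # ['osd.1']
```

With an empty `bad` list the validator takes the "Pass test if OSDs are all up and in" branch seen in the log; otherwise the fail task fires.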
2025-09-19 11:58:33.288814 | orchestrator | ++ export ARA=false 2025-09-19 11:58:33.288825 | orchestrator | ++ ARA=false 2025-09-19 11:58:33.288836 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 11:58:33.288847 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 11:58:33.288858 | orchestrator | ++ export TEMPEST=false 2025-09-19 11:58:33.288869 | orchestrator | ++ TEMPEST=false 2025-09-19 11:58:33.288880 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 11:58:33.288891 | orchestrator | ++ IS_ZUUL=true 2025-09-19 11:58:33.288902 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-09-19 11:58:33.288913 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.246 2025-09-19 11:58:33.288923 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 11:58:33.288971 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 11:58:33.288983 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 11:58:33.288993 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 11:58:33.289004 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 11:58:33.289015 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 11:58:33.289026 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 11:58:33.289037 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 11:58:33.289047 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-19 11:58:33.289058 | orchestrator | + source /etc/os-release 2025-09-19 11:58:33.289068 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2025-09-19 11:58:33.289079 | orchestrator | ++ NAME=Ubuntu 2025-09-19 11:58:33.289090 | orchestrator | ++ VERSION_ID=24.04 2025-09-19 11:58:33.289100 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-09-19 11:58:33.289111 | orchestrator | ++ VERSION_CODENAME=noble 2025-09-19 11:58:33.289122 | orchestrator | ++ ID=ubuntu 2025-09-19 11:58:33.289133 | orchestrator | ++ ID_LIKE=debian 2025-09-19 11:58:33.289143 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-09-19 11:58:33.289154 
| orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-09-19 11:58:33.289165 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-09-19 11:58:33.289176 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-09-19 11:58:33.289188 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-09-19 11:58:33.289199 | orchestrator | ++ LOGO=ubuntu-logo 2025-09-19 11:58:33.289209 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-09-19 11:58:33.289221 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-09-19 11:58:33.289233 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-19 11:58:33.329408 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-19 11:58:56.567019 | orchestrator | 2025-09-19 11:58:56.567204 | orchestrator | # Status of Elasticsearch 2025-09-19 11:58:56.567235 | orchestrator | 2025-09-19 11:58:56.567256 | orchestrator | + pushd /opt/configuration/contrib 2025-09-19 11:58:56.567307 | orchestrator | + echo 2025-09-19 11:58:56.567329 | orchestrator | + echo '# Status of Elasticsearch' 2025-09-19 11:58:56.567348 | orchestrator | + echo 2025-09-19 11:58:56.567367 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-09-19 11:58:56.761094 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-09-19 11:58:56.761246 | orchestrator | 2025-09-19 11:58:56.761276 | orchestrator | # Status of MariaDB 2025-09-19 11:58:56.761300 | orchestrator | 2025-09-19 11:58:56.761321 | orchestrator | + echo 2025-09-19 11:58:56.761341 | orchestrator | + echo '# Status of MariaDB' 2025-09-19 11:58:56.761354 | orchestrator | + echo 2025-09-19 11:58:56.761365 | orchestrator | + MARIADB_USER=root_shard_0 2025-09-19 11:58:56.761377 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-09-19 11:58:56.832673 | orchestrator | Reading package lists... 2025-09-19 11:58:57.183122 | orchestrator | Building dependency tree... 2025-09-19 11:58:57.183395 | orchestrator | Reading state information... 2025-09-19 11:58:57.565587 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-09-19 11:58:57.565685 | orchestrator | bc set to manually installed. 2025-09-19 11:58:57.565700 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. 
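Editor's note: the `check_galera_cluster` probe invoked above essentially queries one node for its Galera status variables (`SHOW GLOBAL STATUS LIKE 'wsrep_%'`) and compares `wsrep_cluster_size` against the expected node count. A standalone sketch of that evaluation logic in Python; the variable names are standard Galera status variables, but the exact pass/fail policy here is illustrative, not the plugin's:

```python
def galera_health(status: dict, expected_nodes: int) -> tuple[bool, str]:
    """Evaluate Galera wsrep status variables from a single node.

    `status` maps variable names to string values, as returned by
    SHOW GLOBAL STATUS LIKE 'wsrep_%'. Returns (healthy, message).
    """
    size = int(status.get("wsrep_cluster_size", 0))
    if status.get("wsrep_cluster_status") != "Primary":
        return False, "cluster is not in Primary state"
    if status.get("wsrep_ready") != "ON":
        return False, "node is not ready to accept queries"
    if size < expected_nodes:
        return False, f"only {size}/{expected_nodes} nodes in cluster"
    return True, f"number of NODES = {size} (wsrep_cluster_size)"


ok, msg = galera_health(
    {"wsrep_cluster_size": "3", "wsrep_cluster_status": "Primary", "wsrep_ready": "ON"},
    expected_nodes=3,
)
print("OK:" if ok else "CRITICAL:", msg)  # OK: number of NODES = 3 (wsrep_cluster_size)
```

The healthy case matches the "OK: number of NODES = 3 (wsrep_cluster_size)" line that the plugin prints below.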
2025-09-19 11:58:58.241406 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-09-19 11:58:58.241801 | orchestrator | 2025-09-19 11:58:58.241906 | orchestrator | # Status of Prometheus 2025-09-19 11:58:58.241930 | orchestrator | 2025-09-19 11:58:58.241948 | orchestrator | + echo 2025-09-19 11:58:58.241967 | orchestrator | + echo '# Status of Prometheus' 2025-09-19 11:58:58.241984 | orchestrator | + echo 2025-09-19 11:58:58.242000 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-09-19 11:58:58.304079 | orchestrator | Unauthorized 2025-09-19 11:58:58.307695 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-09-19 11:58:58.378490 | orchestrator | Unauthorized 2025-09-19 11:58:58.381814 | orchestrator | 2025-09-19 11:58:58.381845 | orchestrator | # Status of RabbitMQ 2025-09-19 11:58:58.381855 | orchestrator | 2025-09-19 11:58:58.381863 | orchestrator | + echo 2025-09-19 11:58:58.381871 | orchestrator | + echo '# Status of RabbitMQ' 2025-09-19 11:58:58.381880 | orchestrator | + echo 2025-09-19 11:58:58.381889 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-09-19 11:58:58.823113 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-09-19 11:58:58.838488 | orchestrator | 2025-09-19 11:58:58.838553 | orchestrator | # Status of Redis 2025-09-19 11:58:58.838562 | orchestrator | 2025-09-19 11:58:58.838568 | orchestrator | + echo 2025-09-19 11:58:58.838574 | orchestrator | + echo '# Status of Redis' 2025-09-19 11:58:58.838580 | orchestrator | + echo 2025-09-19 11:58:58.838588 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-09-19 11:58:58.846131 | orchestrator | 
TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001592s;;;0.000000;10.000000 2025-09-19 11:58:58.846891 | orchestrator | 2025-09-19 11:58:58.846999 | orchestrator | + popd 2025-09-19 11:58:58.847011 | orchestrator | + echo 2025-09-19 11:58:58.847022 | orchestrator | # Create backup of MariaDB database 2025-09-19 11:58:58.847034 | orchestrator | + echo '# Create backup of MariaDB database' 2025-09-19 11:58:58.847044 | orchestrator | + echo 2025-09-19 11:58:58.847057 | orchestrator | 2025-09-19 11:58:58.847068 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-09-19 11:59:00.793479 | orchestrator | 2025-09-19 11:59:00 | INFO  | Task 5c681625-5d51-4956-8d28-9379296e926e (mariadb_backup) was prepared for execution. 2025-09-19 11:59:00.793578 | orchestrator | 2025-09-19 11:59:00 | INFO  | It takes a moment until task 5c681625-5d51-4956-8d28-9379296e926e (mariadb_backup) has been started and output is visible here. 2025-09-19 12:01:01.455126 | orchestrator | 2025-09-19 12:01:01.455236 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 12:01:01.455276 | orchestrator | 2025-09-19 12:01:01.455289 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 12:01:01.455301 | orchestrator | Friday 19 September 2025 11:59:04 +0000 (0:00:00.176) 0:00:00.176 ****** 2025-09-19 12:01:01.455312 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:01:01.455325 | orchestrator | ok: [testbed-node-1] 2025-09-19 12:01:01.455336 | orchestrator | ok: [testbed-node-2] 2025-09-19 12:01:01.455346 | orchestrator | 2025-09-19 12:01:01.455357 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 12:01:01.455369 | orchestrator | Friday 19 September 2025 11:59:04 +0000 (0:00:00.296) 0:00:00.473 ****** 2025-09-19 12:01:01.455380 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-09-19 12:01:01.455391 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-19 12:01:01.455402 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-19 12:01:01.455413 | orchestrator | 2025-09-19 12:01:01.455424 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-19 12:01:01.455435 | orchestrator | 2025-09-19 12:01:01.455446 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-19 12:01:01.455457 | orchestrator | Friday 19 September 2025 11:59:05 +0000 (0:00:00.547) 0:00:01.020 ****** 2025-09-19 12:01:01.455468 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 12:01:01.455479 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-19 12:01:01.455490 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-19 12:01:01.455501 | orchestrator | 2025-09-19 12:01:01.455511 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 12:01:01.455523 | orchestrator | Friday 19 September 2025 11:59:05 +0000 (0:00:00.395) 0:00:01.416 ****** 2025-09-19 12:01:01.455535 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 12:01:01.455546 | orchestrator | 2025-09-19 12:01:01.455557 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-09-19 12:01:01.455568 | orchestrator | Friday 19 September 2025 11:59:06 +0000 (0:00:00.552) 0:00:01.968 ****** 2025-09-19 12:01:01.455579 | orchestrator | ok: [testbed-node-1] 2025-09-19 12:01:01.455590 | orchestrator | ok: [testbed-node-0] 2025-09-19 12:01:01.455601 | orchestrator | ok: [testbed-node-2] 2025-09-19 12:01:01.455612 | orchestrator | 2025-09-19 12:01:01.455622 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-09-19 12:01:01.455634 | orchestrator | Friday 19 September 2025 11:59:09 +0000 (0:00:03.176) 0:00:05.145 ****** 2025-09-19 12:01:01.455646 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-19 12:01:01.455660 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-09-19 12:01:01.455673 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-19 12:01:01.455686 | orchestrator | mariadb_bootstrap_restart 2025-09-19 12:01:01.455699 | orchestrator | skipping: [testbed-node-1] 2025-09-19 12:01:01.455711 | orchestrator | skipping: [testbed-node-2] 2025-09-19 12:01:01.455724 | orchestrator | changed: [testbed-node-0] 2025-09-19 12:01:01.455743 | orchestrator | 2025-09-19 12:01:01.455762 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-19 12:01:01.455781 | orchestrator | skipping: no hosts matched 2025-09-19 12:01:01.455798 | orchestrator | 2025-09-19 12:01:01.455816 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 12:01:01.455834 | orchestrator | skipping: no hosts matched 2025-09-19 12:01:01.455852 | orchestrator | 2025-09-19 12:01:01.455870 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-19 12:01:01.455890 | orchestrator | skipping: no hosts matched 2025-09-19 12:01:01.455909 | orchestrator | 2025-09-19 12:01:01.455928 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-19 12:01:01.455950 | orchestrator | 2025-09-19 12:01:01.455984 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-19 12:01:01.455997 | orchestrator | Friday 19 September 2025 12:01:00 +0000 (0:01:50.840) 0:01:55.985 ****** 2025-09-19 12:01:01.456010 | orchestrator | 
skipping: [testbed-node-0]
2025-09-19 12:01:01.456022 | orchestrator | skipping: [testbed-node-1]
2025-09-19 12:01:01.456033 | orchestrator | skipping: [testbed-node-2]
2025-09-19 12:01:01.456085 | orchestrator |
2025-09-19 12:01:01.456096 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-09-19 12:01:01.456107 | orchestrator | Friday 19 September 2025 12:01:00 +0000 (0:00:00.308) 0:01:56.293 ******
2025-09-19 12:01:01.456118 | orchestrator | skipping: [testbed-node-0]
2025-09-19 12:01:01.456129 | orchestrator | skipping: [testbed-node-1]
2025-09-19 12:01:01.456140 | orchestrator | skipping: [testbed-node-2]
2025-09-19 12:01:01.456151 | orchestrator |
2025-09-19 12:01:01.456162 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 12:01:01.456228 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 12:01:01.456243 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 12:01:01.456255 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 12:01:01.456266 | orchestrator |
2025-09-19 12:01:01.456277 | orchestrator |
2025-09-19 12:01:01.456288 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 12:01:01.456299 | orchestrator | Friday 19 September 2025 12:01:01 +0000 (0:00:00.397) 0:01:56.691 ******
2025-09-19 12:01:01.456310 | orchestrator | ===============================================================================
2025-09-19 12:01:01.456321 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 110.84s
2025-09-19 12:01:01.456353 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.18s
2025-09-19 12:01:01.456364 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.55s
2025-09-19 12:01:01.456375 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s
2025-09-19 12:01:01.456386 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.40s
2025-09-19 12:01:01.456415 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s
2025-09-19 12:01:01.456426 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s
2025-09-19 12:01:01.456437 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-09-19 12:01:01.712196 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2025-09-19 12:01:01.719403 | orchestrator | + set -e
2025-09-19 12:01:01.719541 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 12:01:01.719558 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 12:01:01.719570 | orchestrator | ++ INTERACTIVE=false
2025-09-19 12:01:01.719581 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 12:01:01.719593 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 12:01:01.719612 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-09-19 12:01:01.720968 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-09-19 12:01:01.727485 | orchestrator |
2025-09-19 12:01:01.727537 | orchestrator | # OpenStack endpoints
2025-09-19 12:01:01.727553 | orchestrator |
2025-09-19 12:01:01.727567 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-09-19 12:01:01.727581 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-09-19 12:01:01.727595 | orchestrator | + export OS_CLOUD=admin
2025-09-19 12:01:01.727608 | orchestrator | + OS_CLOUD=admin
2025-09-19 12:01:01.727621 | orchestrator | + echo
2025-09-19 12:01:01.727634 | orchestrator | + echo '# OpenStack endpoints'
2025-09-19 12:01:01.727647 | orchestrator | + echo
2025-09-19 12:01:01.727660 | orchestrator | + openstack endpoint list
2025-09-19 12:01:04.773783 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-09-19 12:01:04.773843 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2025-09-19 12:01:04.773848 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-09-19 12:01:04.773853 | orchestrator | | 01a1d7f38a744e54b72d0b411d54f098 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2025-09-19 12:01:04.773857 | orchestrator | | 0ce53df6314d4f59805b470fc98313d0 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2025-09-19 12:01:04.773861 | orchestrator | | 0f9753f5eb7a4500a07c4b8c4e3cd257 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2025-09-19 12:01:04.773865 | orchestrator | | 17738825cee94a42aca4ed169be1ce26 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2025-09-19 12:01:04.773869 | orchestrator | | 25775a0c088a4f768fd2831d8f21229e | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-09-19 12:01:04.773873 | orchestrator | | 404bc37a04514feeb44f22fb9b610dcc | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-09-19 12:01:04.773883 | orchestrator | | 4fd6e8e552d442ab960d44461758629f | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2025-09-19 12:01:04.773887 | orchestrator |
| 5967ff76a9834e04ad1ce2a9e2946022 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-09-19 12:01:04.773891 | orchestrator | | 64c2a70affc04d0d988e539ca45f65c3 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-09-19 12:01:04.773896 | orchestrator | | 6d57ce503ad7465bbb084375cc5de771 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-09-19 12:01:04.773904 | orchestrator | | 800c68d986df4821b99156a816db357f | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-09-19 12:01:04.773913 | orchestrator | | 871b0fcea2f6415aa30f5e96bec41d2b | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-09-19 12:01:04.773919 | orchestrator | | 8724c41cced548cd9dd1583a6ffd0d27 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-09-19 12:01:04.773925 | orchestrator | | 96beac660c3145a8bf617e84d34bcd68 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-09-19 12:01:04.773932 | orchestrator | | a025c1855a96403da473fa32deb0f0df | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-09-19 12:01:04.773938 | orchestrator | | a2b144cc49df4ae78505dbc834d77dee | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-09-19 12:01:04.773944 | orchestrator | | a4ae08557d2d4440be8e8142d35cd1f5 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-09-19 12:01:04.773951 | orchestrator | | af0e72e1b07a41f6b98af90e25cdceff | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-09-19 12:01:04.773961 | orchestrator | | cb969766039847918058df5fce0899c5 | RegionOne | cinderv3 | volumev3 | True | internal | 
https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-19 12:01:04.773969 | orchestrator | | f8dedefb8afb4128b13465c3daf81dc5 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-09-19 12:01:04.773984 | orchestrator | | fa5dfaf0e8c64908bea05ae4066f5151 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-09-19 12:01:04.773991 | orchestrator | | fb10d249b6ff4666be6e8b105665eda1 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-19 12:01:04.773997 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-19 12:01:04.930671 | orchestrator | 2025-09-19 12:01:04.930734 | orchestrator | # Cinder 2025-09-19 12:01:04.930746 | orchestrator | 2025-09-19 12:01:04.930756 | orchestrator | + echo 2025-09-19 12:01:04.930765 | orchestrator | + echo '# Cinder' 2025-09-19 12:01:04.930773 | orchestrator | + echo 2025-09-19 12:01:04.930782 | orchestrator | + openstack volume service list 2025-09-19 12:01:07.366768 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 12:01:07.366856 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-09-19 12:01:07.366869 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 12:01:07.366879 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-19T12:00:58.000000 | 2025-09-19 12:01:07.366889 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-19T12:00:59.000000 | 2025-09-19 12:01:07.366899 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-19T12:00:59.000000 | 2025-09-19 
12:01:07.366909 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-09-19T12:01:05.000000 | 2025-09-19 12:01:07.366919 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-09-19T12:01:05.000000 | 2025-09-19 12:01:07.366928 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-09-19T12:01:01.000000 | 2025-09-19 12:01:07.366938 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-09-19T12:00:57.000000 | 2025-09-19 12:01:07.366964 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-09-19T12:00:57.000000 | 2025-09-19 12:01:07.366975 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-09-19T12:00:58.000000 | 2025-09-19 12:01:07.366985 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 12:01:07.522525 | orchestrator | 2025-09-19 12:01:07.522601 | orchestrator | # Neutron 2025-09-19 12:01:07.522616 | orchestrator | 2025-09-19 12:01:07.522628 | orchestrator | + echo 2025-09-19 12:01:07.522639 | orchestrator | + echo '# Neutron' 2025-09-19 12:01:07.522651 | orchestrator | + echo 2025-09-19 12:01:07.522662 | orchestrator | + openstack network agent list 2025-09-19 12:01:10.079500 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 12:01:10.079604 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-09-19 12:01:10.079619 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 12:01:10.079630 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-09-19 
12:01:10.079669 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-09-19 12:01:10.079680 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-09-19 12:01:10.079691 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-09-19 12:01:10.079702 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-09-19 12:01:10.079713 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-09-19 12:01:10.079724 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 12:01:10.079735 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 12:01:10.079746 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 12:01:10.079757 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 12:01:10.354277 | orchestrator | + openstack network service provider list 2025-09-19 12:01:13.459681 | orchestrator | +---------------+------+---------+ 2025-09-19 12:01:13.459789 | orchestrator | | Service Type | Name | Default | 2025-09-19 12:01:13.459805 | orchestrator | +---------------+------+---------+ 2025-09-19 12:01:13.459817 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-09-19 12:01:13.459827 | orchestrator | +---------------+------+---------+ 2025-09-19 12:01:13.721032 | orchestrator | 2025-09-19 12:01:13.721185 | orchestrator | # Nova 2025-09-19 12:01:13.721201 | orchestrator 
| 2025-09-19 12:01:13.721210 | orchestrator | + echo 2025-09-19 12:01:13.721219 | orchestrator | + echo '# Nova' 2025-09-19 12:01:13.721228 | orchestrator | + echo 2025-09-19 12:01:13.721237 | orchestrator | + openstack compute service list 2025-09-19 12:01:16.598101 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 12:01:16.598234 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-09-19 12:01:16.598244 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 12:01:16.598251 | orchestrator | | 07aac8a6-194e-4ed4-aa68-95ae86262780 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-19T12:01:09.000000 | 2025-09-19 12:01:16.598257 | orchestrator | | 9e93bc33-9826-417f-bbe8-0fe0d6bdada4 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-19T12:01:16.000000 | 2025-09-19 12:01:16.598263 | orchestrator | | 0e504fb4-b433-4655-be7e-351b999d8f88 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-19T12:01:08.000000 | 2025-09-19 12:01:16.598268 | orchestrator | | 3c262391-4ee3-4c49-b342-16f7ff84b920 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-09-19T12:01:06.000000 | 2025-09-19 12:01:16.598275 | orchestrator | | 4b158d25-c879-444d-8c7f-62747d72b149 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-09-19T12:01:08.000000 | 2025-09-19 12:01:16.598280 | orchestrator | | f9cc5041-7bf9-4989-96f8-81db0d73f955 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-09-19T12:01:10.000000 | 2025-09-19 12:01:16.599072 | orchestrator | | 67a919e2-f6bf-46e3-8cf4-d34288d28709 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-09-19T12:01:08.000000 | 2025-09-19 12:01:16.599124 | orchestrator | | 60f7961d-306a-4c59-826e-fd8c0fd0be1f | 
nova-compute | testbed-node-4 | nova | enabled | up | 2025-09-19T12:01:08.000000 | 2025-09-19 12:01:16.599174 | orchestrator | | f225eb3e-a499-45d4-879f-560825c51d13 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-09-19T12:01:09.000000 | 2025-09-19 12:01:16.599182 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 12:01:16.857825 | orchestrator | + openstack hypervisor list 2025-09-19 12:01:21.238628 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 12:01:21.238758 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-09-19 12:01:21.238784 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 12:01:21.238803 | orchestrator | | 9b417353-b19a-4d13-8de9-3aee228a0cd8 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-09-19 12:01:21.238822 | orchestrator | | 0c47b7f4-3665-4721-b56f-1783b5204edd | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-09-19 12:01:21.238840 | orchestrator | | c17fc243-1ed1-4e1a-a549-f63cfad2e851 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-09-19 12:01:21.238859 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 12:01:21.513741 | orchestrator | 2025-09-19 12:01:21.513840 | orchestrator | # Run OpenStack test play 2025-09-19 12:01:21.513856 | orchestrator | 2025-09-19 12:01:21.513868 | orchestrator | + echo 2025-09-19 12:01:21.513880 | orchestrator | + echo '# Run OpenStack test play' 2025-09-19 12:01:21.513892 | orchestrator | + echo 2025-09-19 12:01:21.513904 | orchestrator | + osism apply --environment openstack test 2025-09-19 12:01:23.329925 | orchestrator | 2025-09-19 12:01:23 | INFO  | Trying to run play test in environment openstack 
2025-09-19 12:01:23.393108 | orchestrator | 2025-09-19 12:01:23 | INFO  | Task c28639f5-8d88-4190-aec5-055c3a384fed (test) was prepared for execution. 2025-09-19 12:01:23.393242 | orchestrator | 2025-09-19 12:01:23 | INFO  | It takes a moment until task c28639f5-8d88-4190-aec5-055c3a384fed (test) has been started and output is visible here. 2025-09-19 12:03:04.801213 | orchestrator | 2025-09-19 12:03:04 | INFO  | Trying to run play test in environment openstack 2025-09-19 12:03:04.803140 | orchestrator | 2025-09-19 12:03:04 | INFO  | Task e6b60c42-0494-46bd-af65-ce2203287c9f (test) was prepared for execution. 2025-09-19 12:03:04.803189 | orchestrator | 2025-09-19 12:03:04 | INFO  | It takes a moment until task e6b60c42-0494-46bd-af65-ce2203287c9f (test) has been started and output is visible here. 2025-09-19 12:04:03.740765 | orchestrator | 2025-09-19 12:04:03.740867 | orchestrator | PLAY [Create test project] ***************************************************** 2025-09-19 12:04:03.740883 | orchestrator | 2025-09-19 12:04:03.740895 | orchestrator | TASK [Create test domain] ****************************************************** 2025-09-19 12:04:03.740907 | orchestrator | Friday 19 September 2025 12:01:27 +0000 (0:00:00.086) 0:00:00.086 ****** 2025-09-19 12:04:03.740918 | orchestrator | changed: [localhost] 2025-09-19 12:04:03.740930 | orchestrator | 2025-09-19 12:04:03.740941 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-09-19 12:04:03.740953 | orchestrator | Friday 19 September 2025 12:01:30 +0000 (0:00:03.651) 0:00:03.738 ****** 2025-09-19 12:04:03.740963 | orchestrator | changed: [localhost] 2025-09-19 12:04:03.740974 | orchestrator | 2025-09-19 12:04:03.740985 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-09-19 12:04:03.740996 | orchestrator | Friday 19 September 2025 12:01:35 +0000 (0:00:04.172) 0:00:07.910 ****** 2025-09-19 12:04:03.741007 | 
orchestrator | changed: [localhost] 2025-09-19 12:04:03.741018 | orchestrator | 2025-09-19 12:04:03.741101 | orchestrator | TASK [Create test project] ***************************************************** 2025-09-19 12:04:03.741113 | orchestrator | Friday 19 September 2025 12:01:41 +0000 (0:00:06.213) 0:00:14.124 ****** 2025-09-19 12:04:03.741125 | orchestrator | changed: [localhost] 2025-09-19 12:04:03.741136 | orchestrator | 2025-09-19 12:04:03.741193 | orchestrator | TASK [Create test user] ******************************************************** 2025-09-19 12:04:03.741206 | orchestrator | Friday 19 September 2025 12:01:45 +0000 (0:00:03.972) 0:00:18.096 ****** 2025-09-19 12:04:03.741217 | orchestrator | changed: [localhost] 2025-09-19 12:04:03.741228 | orchestrator | 2025-09-19 12:04:03.741239 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-09-19 12:04:03.741250 | orchestrator | Friday 19 September 2025 12:01:49 +0000 (0:00:04.151) 0:00:22.248 ****** 2025-09-19 12:04:03.741261 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-09-19 12:04:03.741273 | orchestrator | changed: [localhost] => (item=member) 2025-09-19 12:04:03.741284 | orchestrator | changed: [localhost] => (item=creator) 2025-09-19 12:04:03.741296 | orchestrator | 2025-09-19 12:04:03.741307 | orchestrator | TASK [Create test server group] ************************************************ 2025-09-19 12:04:03.741324 | orchestrator | Friday 19 September 2025 12:02:01 +0000 (0:00:11.877) 0:00:34.125 ****** 2025-09-19 12:04:03.741343 | orchestrator | changed: [localhost] 2025-09-19 12:04:03.741364 | orchestrator | 2025-09-19 12:04:03.741392 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-09-19 12:04:03.741412 | orchestrator | Friday 19 September 2025 12:02:06 +0000 (0:00:04.904) 0:00:39.030 ****** 2025-09-19 12:04:03.741430 | orchestrator | changed: [localhost] 2025-09-19 
12:04:03.741449 | orchestrator | 2025-09-19 12:04:03.741469 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-09-19 12:04:03.741487 | orchestrator | Friday 19 September 2025 12:02:10 +0000 (0:00:04.731) 0:00:43.761 ****** 2025-09-19 12:04:03.741504 | orchestrator | changed: [localhost] 2025-09-19 12:04:03.741515 | orchestrator | 2025-09-19 12:04:03.741526 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-09-19 12:04:03.741537 | orchestrator | Friday 19 September 2025 12:02:15 +0000 (0:00:04.279) 0:00:48.041 ****** 2025-09-19 12:04:03.741548 | orchestrator | changed: [localhost] 2025-09-19 12:04:03.741559 | orchestrator | 2025-09-19 12:04:03.741570 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-09-19 12:04:03.741581 | orchestrator | Friday 19 September 2025 12:02:18 +0000 (0:00:03.810) 0:00:51.852 ****** 2025-09-19 12:04:03.741591 | orchestrator | changed: [localhost] 2025-09-19 12:04:03.741602 | orchestrator | 2025-09-19 12:04:03.741613 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-09-19 12:04:03.741624 | orchestrator | Friday 19 September 2025 12:02:23 +0000 (0:00:04.063) 0:00:55.915 ****** 2025-09-19 12:04:03.741635 | orchestrator | changed: [localhost] 2025-09-19 12:04:03.741645 | orchestrator | 2025-09-19 12:04:03.741656 | orchestrator | TASK [Create test network topology] ******************************************** 2025-09-19 12:04:03.741667 | orchestrator | Friday 19 September 2025 12:02:26 +0000 (0:00:03.816) 0:00:59.732 ****** 2025-09-19 12:04:03.741678 | orchestrator | changed: [localhost] 2025-09-19 12:04:03.741688 | orchestrator | 2025-09-19 12:04:03.741699 | orchestrator | TASK [Create test instances] *************************************************** 2025-09-19 12:04:03.741710 | orchestrator | Friday 19 September 2025 12:02:43 +0000 
(0:00:17.062) 0:01:16.795 ****** 2025-09-19 12:04:03.741722 | orchestrator | failed: [localhost] (item=test) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 12:04:03.741748 | orchestrator | failed: [localhost] (item=test-1) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-1", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 12:04:03.741759 | orchestrator | failed: [localhost] (item=test-2) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-2", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 12:04:03.741770 | orchestrator | failed: [localhost] (item=test-3) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-3", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 12:04:03.741792 | orchestrator | failed: [localhost] (item=test-4) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-4", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 12:04:03.741803 | orchestrator | 2025-09-19 12:04:03.741814 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 12:04:03.741843 | orchestrator | localhost : ok=13  changed=13  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-19 12:04:03.741855 | orchestrator | 2025-09-19 12:04:03.741866 | orchestrator | 2025-09-19 12:04:03.741877 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 12:04:03.741888 | orchestrator | Friday 19 September 2025 12:03:04 +0000 (0:00:20.646) 0:01:37.441 ****** 2025-09-19 12:04:03.741898 | orchestrator | 
=============================================================================== 2025-09-19 12:04:03.741909 | orchestrator | Create test instances -------------------------------------------------- 20.65s 2025-09-19 12:04:03.741920 | orchestrator | Create test network topology ------------------------------------------- 17.06s 2025-09-19 12:04:03.741930 | orchestrator | Add member roles to user test ------------------------------------------ 11.88s 2025-09-19 12:04:03.742011 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.21s 2025-09-19 12:04:03.742150 | orchestrator | Create test server group ------------------------------------------------ 4.90s 2025-09-19 12:04:03.742163 | orchestrator | Create ssh security group ----------------------------------------------- 4.73s 2025-09-19 12:04:03.742174 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.28s 2025-09-19 12:04:03.742184 | orchestrator | Create test-admin user -------------------------------------------------- 4.17s 2025-09-19 12:04:03.742199 | orchestrator | Create test user -------------------------------------------------------- 4.15s 2025-09-19 12:04:03.742210 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.06s 2025-09-19 12:04:03.742221 | orchestrator | Create test project ----------------------------------------------------- 3.97s 2025-09-19 12:04:03.742231 | orchestrator | Create test keypair ----------------------------------------------------- 3.82s 2025-09-19 12:04:03.742242 | orchestrator | Create icmp security group ---------------------------------------------- 3.81s 2025-09-19 12:04:03.742253 | orchestrator | Create test domain ------------------------------------------------------ 3.65s 2025-09-19 12:04:03.742263 | orchestrator | 2025-09-19 12:04:03.742274 | orchestrator | PLAY [Create test project] ***************************************************** 2025-09-19 
12:04:03.742285 | orchestrator | 2025-09-19 12:04:03.742308 | orchestrator | TASK [Create test domain] ****************************************************** 2025-09-19 12:04:03.742319 | orchestrator | Friday 19 September 2025 12:03:08 +0000 (0:00:00.076) 0:00:00.076 ****** 2025-09-19 12:04:03.742330 | orchestrator | ok: [localhost] 2025-09-19 12:04:03.742341 | orchestrator | 2025-09-19 12:04:03.742352 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-09-19 12:04:03.742363 | orchestrator | Friday 19 September 2025 12:03:12 +0000 (0:00:03.828) 0:00:03.905 ****** 2025-09-19 12:04:03.742374 | orchestrator | ok: [localhost] 2025-09-19 12:04:03.742384 | orchestrator | 2025-09-19 12:04:03.742395 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-09-19 12:04:03.742410 | orchestrator | Friday 19 September 2025 12:03:16 +0000 (0:00:03.684) 0:00:07.589 ****** 2025-09-19 12:04:03.742421 | orchestrator | changed: [localhost] 2025-09-19 12:04:03.742432 | orchestrator | 2025-09-19 12:04:03.742443 | orchestrator | TASK [Create test project] ***************************************************** 2025-09-19 12:04:03.742453 | orchestrator | Friday 19 September 2025 12:03:22 +0000 (0:00:06.338) 0:00:13.928 ****** 2025-09-19 12:04:03.742464 | orchestrator | ok: [localhost] 2025-09-19 12:04:03.742475 | orchestrator | 2025-09-19 12:04:03.742485 | orchestrator | TASK [Create test user] ******************************************************** 2025-09-19 12:04:03.742506 | orchestrator | Friday 19 September 2025 12:03:26 +0000 (0:00:03.651) 0:00:17.579 ****** 2025-09-19 12:04:03.742517 | orchestrator | ok: [localhost] 2025-09-19 12:04:03.742528 | orchestrator | 2025-09-19 12:04:03.742538 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-09-19 12:04:03.742549 | orchestrator | Friday 19 September 2025 12:03:29 +0000 (0:00:03.645) 
0:00:21.225 ****** 2025-09-19 12:04:03.742560 | orchestrator | ok: [localhost] => (item=load-balancer_member) 2025-09-19 12:04:03.742571 | orchestrator | ok: [localhost] => (item=member) 2025-09-19 12:04:03.742582 | orchestrator | ok: [localhost] => (item=creator) 2025-09-19 12:04:03.742593 | orchestrator | 2025-09-19 12:04:03.742604 | orchestrator | TASK [Create test server group] ************************************************ 2025-09-19 12:04:03.742615 | orchestrator | Friday 19 September 2025 12:03:40 +0000 (0:00:11.113) 0:00:32.339 ****** 2025-09-19 12:04:03.742626 | orchestrator | ok: [localhost] 2025-09-19 12:04:03.742636 | orchestrator | 2025-09-19 12:04:03.742647 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-09-19 12:04:03.742658 | orchestrator | Friday 19 September 2025 12:03:45 +0000 (0:00:04.316) 0:00:36.655 ****** 2025-09-19 12:04:03.742668 | orchestrator | ok: [localhost] 2025-09-19 12:04:03.742679 | orchestrator | 2025-09-19 12:04:03.742690 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-09-19 12:04:03.742700 | orchestrator | Friday 19 September 2025 12:03:49 +0000 (0:00:03.855) 0:00:40.510 ****** 2025-09-19 12:04:03.742711 | orchestrator | ok: [localhost] 2025-09-19 12:04:03.742722 | orchestrator | 2025-09-19 12:04:03.742732 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-09-19 12:04:03.742743 | orchestrator | Friday 19 September 2025 12:03:53 +0000 (0:00:03.921) 0:00:44.431 ****** 2025-09-19 12:04:03.742754 | orchestrator | ok: [localhost] 2025-09-19 12:04:03.742765 | orchestrator | 2025-09-19 12:04:03.742776 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-09-19 12:04:03.742786 | orchestrator | Friday 19 September 2025 12:03:56 +0000 (0:00:03.426) 0:00:47.858 ****** 2025-09-19 12:04:03.742797 | orchestrator | ok: [localhost] 
2025-09-19 12:04:03.742807 | orchestrator | 2025-09-19 12:04:03.742818 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-09-19 12:04:03.742829 | orchestrator | Friday 19 September 2025 12:04:00 +0000 (0:00:03.680) 0:00:51.539 ****** 2025-09-19 12:04:03.742840 | orchestrator | ok: [localhost] 2025-09-19 12:04:03.742850 | orchestrator | 2025-09-19 12:04:03.742861 | orchestrator | TASK [Create test network topology] ******************************************** 2025-09-19 12:04:03.742882 | orchestrator | Friday 19 September 2025 12:04:03 +0000 (0:00:03.579) 0:00:55.118 ****** 2025-09-19 12:04:29.922353 | orchestrator | changed: [localhost] 2025-09-19 12:04:29.922490 | orchestrator | 2025-09-19 12:04:29.922517 | orchestrator | TASK [Create test instances] *************************************************** 2025-09-19 12:04:29.922538 | orchestrator | Friday 19 September 2025 12:04:09 +0000 (0:00:05.955) 0:01:01.073 ****** 2025-09-19 12:04:29.922559 | orchestrator | failed: [localhost] (item=test) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 12:04:29.922582 | orchestrator | failed: [localhost] (item=test-1) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-1", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 12:04:29.922600 | orchestrator | failed: [localhost] (item=test-2) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-2", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 12:04:29.922620 | orchestrator | failed: [localhost] (item=test-3) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-3", "msg": "No Flavor found for SCS-1L-1-5"} 
2025-09-19 12:04:29.922637 | orchestrator | failed: [localhost] (item=test-4) => {"ansible_loop_var": "item", "changed": false, "extra_data": {"data": null, "details": null, "response": "None"}, "item": "test-4", "msg": "No Flavor found for SCS-1L-1-5"} 2025-09-19 12:04:29.922694 | orchestrator | 2025-09-19 12:04:29.922711 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 12:04:29.922723 | orchestrator | localhost : ok=13  changed=2  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-19 12:04:29.922735 | orchestrator | 2025-09-19 12:04:29.922746 | orchestrator | 2025-09-19 12:04:29.922757 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 12:04:29.922768 | orchestrator | Friday 19 September 2025 12:04:29 +0000 (0:00:20.025) 0:01:21.099 ****** 2025-09-19 12:04:29.922779 | orchestrator | =============================================================================== 2025-09-19 12:04:29.922789 | orchestrator | Create test instances -------------------------------------------------- 20.03s 2025-09-19 12:04:29.922800 | orchestrator | Add member roles to user test ------------------------------------------ 11.11s 2025-09-19 12:04:29.922826 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.34s 2025-09-19 12:04:29.922838 | orchestrator | Create test network topology -------------------------------------------- 5.96s 2025-09-19 12:04:29.922849 | orchestrator | Create test server group ------------------------------------------------ 4.32s 2025-09-19 12:04:29.922859 | orchestrator | Add rule to ssh security group ------------------------------------------ 3.92s 2025-09-19 12:04:29.922870 | orchestrator | Create ssh security group ----------------------------------------------- 3.86s 2025-09-19 12:04:29.922882 | orchestrator | Create test domain ------------------------------------------------------ 3.83s 
2025-09-19 12:04:29.922895 | orchestrator | Create test-admin user -------------------------------------------------- 3.68s
2025-09-19 12:04:29.922907 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.68s
2025-09-19 12:04:29.922919 | orchestrator | Create test project ----------------------------------------------------- 3.65s
2025-09-19 12:04:29.922930 | orchestrator | Create test user -------------------------------------------------------- 3.65s
2025-09-19 12:04:29.922943 | orchestrator | Create test keypair ----------------------------------------------------- 3.58s
2025-09-19 12:04:29.922955 | orchestrator | Create icmp security group ---------------------------------------------- 3.43s
2025-09-19 12:04:30.476425 | orchestrator | ERROR
2025-09-19 12:04:30.476920 | orchestrator | {
2025-09-19 12:04:30.477029 | orchestrator | "delta": "0:08:32.322015",
2025-09-19 12:04:30.477101 | orchestrator | "end": "2025-09-19 12:04:30.197354",
2025-09-19 12:04:30.477160 | orchestrator | "msg": "non-zero return code",
2025-09-19 12:04:30.477216 | orchestrator | "rc": 2,
2025-09-19 12:04:30.477271 | orchestrator | "start": "2025-09-19 11:55:57.875339"
2025-09-19 12:04:30.477323 | orchestrator | } failure
2025-09-19 12:04:30.526131 |
2025-09-19 12:04:30.526244 | PLAY RECAP
2025-09-19 12:04:30.526303 | orchestrator | ok: 23 changed: 10 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-19 12:04:30.526339 |
2025-09-19 12:04:30.729655 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-19 12:04:30.730744 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-19 12:04:31.608911 |
2025-09-19 12:04:31.609079 | PLAY [Post output play]
2025-09-19 12:04:31.624922 |
2025-09-19 12:04:31.625058 | LOOP [stage-output : Register sources]
2025-09-19 12:04:31.695456 |
2025-09-19 12:04:31.695821 | TASK [stage-output : Check sudo]
2025-09-19 12:04:32.579025 | orchestrator | sudo: a password is required
2025-09-19 12:04:32.734590 | orchestrator | ok: Runtime: 0:00:00.073768
2025-09-19 12:04:32.749934 |
2025-09-19 12:04:32.750102 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-19 12:04:32.786393 |
2025-09-19 12:04:32.786658 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-19 12:04:32.853328 | orchestrator | ok
2025-09-19 12:04:32.861999 |
2025-09-19 12:04:32.862126 | LOOP [stage-output : Ensure target folders exist]
2025-09-19 12:04:33.317749 | orchestrator | ok: "docs"
2025-09-19 12:04:33.318131 |
2025-09-19 12:04:33.547943 | orchestrator | ok: "artifacts"
2025-09-19 12:04:33.787173 | orchestrator | ok: "logs"
2025-09-19 12:04:33.802536 |
2025-09-19 12:04:33.802713 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-19 12:04:33.837309 |
2025-09-19 12:04:33.837560 | TASK [stage-output : Make all log files readable]
2025-09-19 12:04:34.128198 | orchestrator | ok
2025-09-19 12:04:34.137001 |
2025-09-19 12:04:34.137107 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-19 12:04:34.171243 | orchestrator | skipping: Conditional result was False
2025-09-19 12:04:34.186794 |
2025-09-19 12:04:34.186964 | TASK [stage-output : Discover log files for compression]
2025-09-19 12:04:34.211220 | orchestrator | skipping: Conditional result was False
2025-09-19 12:04:34.226687 |
2025-09-19 12:04:34.226824 | LOOP [stage-output : Archive everything from logs]
2025-09-19 12:04:34.273062 |
2025-09-19 12:04:34.273220 | PLAY [Post cleanup play]
2025-09-19 12:04:34.281476 |
2025-09-19 12:04:34.281559 | TASK [Set cloud fact (Zuul deployment)]
2025-09-19 12:04:34.335490 | orchestrator | ok
2025-09-19 12:04:34.346716 |
2025-09-19 12:04:34.346819 | TASK [Set cloud fact (local deployment)]
2025-09-19 12:04:34.380487 | orchestrator | skipping: Conditional result was False
2025-09-19 12:04:34.395459 |
2025-09-19 12:04:34.395582 | TASK [Clean the cloud environment]
2025-09-19 12:04:35.090632 | orchestrator | 2025-09-19 12:04:35 - clean up servers
2025-09-19 12:04:35.851552 | orchestrator | 2025-09-19 12:04:35 - testbed-manager
2025-09-19 12:04:35.939278 | orchestrator | 2025-09-19 12:04:35 - testbed-node-4
2025-09-19 12:04:36.035590 | orchestrator | 2025-09-19 12:04:36 - testbed-node-1
2025-09-19 12:04:36.119442 | orchestrator | 2025-09-19 12:04:36 - testbed-node-3
2025-09-19 12:04:36.212687 | orchestrator | 2025-09-19 12:04:36 - testbed-node-2
2025-09-19 12:04:36.309733 | orchestrator | 2025-09-19 12:04:36 - testbed-node-0
2025-09-19 12:04:36.414534 | orchestrator | 2025-09-19 12:04:36 - testbed-node-5
2025-09-19 12:04:36.525670 | orchestrator | 2025-09-19 12:04:36 - clean up keypairs
2025-09-19 12:04:36.545596 | orchestrator | 2025-09-19 12:04:36 - testbed
2025-09-19 12:04:36.585905 | orchestrator | 2025-09-19 12:04:36 - wait for servers to be gone
2025-09-19 12:04:47.457207 | orchestrator | 2025-09-19 12:04:47 - clean up ports
2025-09-19 12:04:47.651112 | orchestrator | 2025-09-19 12:04:47 - 0816f492-d588-41f1-83e2-52dd736f7cca
2025-09-19 12:04:47.929517 | orchestrator | 2025-09-19 12:04:47 - 2844f12d-9df8-4d63-bd18-c65423acdd5e
2025-09-19 12:04:48.218161 | orchestrator | 2025-09-19 12:04:48 - 2dd4c1bf-a6a2-4b82-8974-7d14e14fbf17
2025-09-19 12:04:48.488723 | orchestrator | 2025-09-19 12:04:48 - 4e4786d2-8578-4737-8174-8097613a5382
2025-09-19 12:04:48.731061 | orchestrator | 2025-09-19 12:04:48 - 620ed29f-5ab0-48d2-aaa8-567386b78b69
2025-09-19 12:04:49.262840 | orchestrator | 2025-09-19 12:04:49 - c950ba4a-7941-4085-ad27-1cc4cd27435a
2025-09-19 12:04:49.515449 | orchestrator | 2025-09-19 12:04:49 - efe6dde7-c375-4a31-b3e3-3f638c0ba0db
2025-09-19 12:04:50.206124 | orchestrator | 2025-09-19 12:04:50 - clean up volumes
2025-09-19 12:04:50.338683 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-3-node-base
2025-09-19 12:04:50.383365 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-manager-base
2025-09-19 12:04:50.422799 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-2-node-base
2025-09-19 12:04:50.464890 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-5-node-base
2025-09-19 12:04:50.505592 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-1-node-base
2025-09-19 12:04:50.551912 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-4-node-base
2025-09-19 12:04:50.595562 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-0-node-base
2025-09-19 12:04:50.635575 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-4-node-4
2025-09-19 12:04:50.676810 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-7-node-4
2025-09-19 12:04:50.719706 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-6-node-3
2025-09-19 12:04:50.756304 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-1-node-4
2025-09-19 12:04:50.802167 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-5-node-5
2025-09-19 12:04:50.842901 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-3-node-3
2025-09-19 12:04:50.884660 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-8-node-5
2025-09-19 12:04:50.927210 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-0-node-3
2025-09-19 12:04:50.968845 | orchestrator | 2025-09-19 12:04:50 - testbed-volume-2-node-5
2025-09-19 12:04:51.029552 | orchestrator | 2025-09-19 12:04:51 - disconnect routers
2025-09-19 12:04:51.115349 | orchestrator | 2025-09-19 12:04:51 - testbed
2025-09-19 12:04:52.735369 | orchestrator | 2025-09-19 12:04:52 - clean up subnets
2025-09-19 12:04:52.798715 | orchestrator | 2025-09-19 12:04:52 - subnet-testbed-management
2025-09-19 12:04:52.970468 | orchestrator | 2025-09-19 12:04:52 - clean up networks
2025-09-19 12:04:53.104537 | orchestrator | 2025-09-19 12:04:53 - net-testbed-management
2025-09-19 12:04:53.469936 | orchestrator | 2025-09-19 12:04:53 - clean up security groups
2025-09-19 12:04:53.519215 | orchestrator | 2025-09-19 12:04:53 - testbed-management
2025-09-19 12:04:53.637581 | orchestrator | 2025-09-19 12:04:53 - testbed-node
2025-09-19 12:04:53.774242 | orchestrator | 2025-09-19 12:04:53 - clean up floating ips
2025-09-19 12:04:53.814008 | orchestrator | 2025-09-19 12:04:53 - 81.163.193.246
2025-09-19 12:04:54.149562 | orchestrator | 2025-09-19 12:04:54 - clean up routers
2025-09-19 12:04:54.267559 | orchestrator | 2025-09-19 12:04:54 - testbed
2025-09-19 12:04:55.454220 | orchestrator | ok: Runtime: 0:00:20.535777
2025-09-19 12:04:55.458643 |
2025-09-19 12:04:55.458882 | PLAY RECAP
2025-09-19 12:04:55.459029 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-19 12:04:55.459110 |
2025-09-19 12:04:55.559099 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-19 12:04:55.561841 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-19 12:04:56.210221 |
2025-09-19 12:04:56.210341 | PLAY [Cleanup play]
2025-09-19 12:04:56.224211 |
2025-09-19 12:04:56.224311 | TASK [Set cloud fact (Zuul deployment)]
2025-09-19 12:04:56.277948 | orchestrator | ok
2025-09-19 12:04:56.286639 |
2025-09-19 12:04:56.286769 | TASK [Set cloud fact (local deployment)]
2025-09-19 12:04:56.319985 | orchestrator | skipping: Conditional result was False
2025-09-19 12:04:56.333403 |
2025-09-19 12:04:56.333517 | TASK [Clean the cloud environment]
2025-09-19 12:04:57.476955 | orchestrator | 2025-09-19 12:04:57 - clean up servers
2025-09-19 12:04:57.958395 | orchestrator | 2025-09-19 12:04:57 - clean up keypairs
2025-09-19 12:04:57.972394 | orchestrator | 2025-09-19 12:04:57 - wait for servers to be gone
2025-09-19 12:04:58.011635 | orchestrator | 2025-09-19 12:04:58 - clean up ports
2025-09-19 12:04:58.090144 | orchestrator | 2025-09-19 12:04:58 - clean up volumes
2025-09-19 12:04:58.163652 | orchestrator | 2025-09-19 12:04:58 - disconnect routers
2025-09-19 12:04:58.190922 | orchestrator | 2025-09-19 12:04:58 - clean up subnets
2025-09-19 12:04:58.215004 | orchestrator | 2025-09-19 12:04:58 - clean up networks
2025-09-19 12:04:58.337548 | orchestrator | 2025-09-19 12:04:58 - clean up security groups
2025-09-19 12:04:58.372797 | orchestrator | 2025-09-19 12:04:58 - clean up floating ips
2025-09-19 12:04:58.397910 | orchestrator | 2025-09-19 12:04:58 - clean up routers
2025-09-19 12:04:58.868973 | orchestrator | ok: Runtime: 0:00:01.319401
2025-09-19 12:04:58.870635 |
2025-09-19 12:04:58.870742 | PLAY RECAP
2025-09-19 12:04:58.870799 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-19 12:04:58.870824 |
2025-09-19 12:04:58.996270 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-19 12:04:58.997285 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-19 12:04:59.719619 |
2025-09-19 12:04:59.719822 | PLAY [Base post-fetch]
2025-09-19 12:04:59.735353 |
2025-09-19 12:04:59.735532 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-19 12:04:59.791219 | orchestrator | skipping: Conditional result was False
2025-09-19 12:04:59.803984 |
2025-09-19 12:04:59.804165 | TASK [fetch-output : Set log path for single node]
2025-09-19 12:04:59.859265 | orchestrator | ok
2025-09-19 12:04:59.866542 |
2025-09-19 12:04:59.866664 | LOOP [fetch-output : Ensure local output dirs]
2025-09-19 12:05:00.366503 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/b1291048054043b3b0db75d23259f197/work/logs"
2025-09-19 12:05:00.640106 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b1291048054043b3b0db75d23259f197/work/artifacts"
2025-09-19 12:05:00.909135 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b1291048054043b3b0db75d23259f197/work/docs"
2025-09-19 12:05:00.927763 |
2025-09-19 12:05:00.927906 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-19 12:05:01.861086 | orchestrator | changed: .d..t...... ./
2025-09-19 12:05:01.861413 | orchestrator | changed: All items complete
2025-09-19 12:05:01.861464 |
2025-09-19 12:05:02.614453 | orchestrator | changed: .d..t...... ./
2025-09-19 12:05:03.368647 | orchestrator | changed: .d..t...... ./
2025-09-19 12:05:03.391630 |
2025-09-19 12:05:03.391790 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-19 12:05:03.912550 | orchestrator -> localhost | ok: Item: artifacts Runtime: 0:00:00.011815
2025-09-19 12:05:04.173536 | orchestrator -> localhost | ok: Item: docs Runtime: 0:00:00.010081
2025-09-19 12:05:04.194772 |
2025-09-19 12:05:04.194985 | PLAY RECAP
2025-09-19 12:05:04.195054 | orchestrator | ok: 4 changed: 3 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-19 12:05:04.195088 |
2025-09-19 12:05:04.317292 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-19 12:05:04.319776 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-19 12:05:05.024936 |
2025-09-19 12:05:05.025101 | PLAY [Base post]
2025-09-19 12:05:05.039939 |
2025-09-19 12:05:05.040081 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-19 12:05:06.118113 | orchestrator | changed
2025-09-19 12:05:06.125972 |
2025-09-19 12:05:06.126085 | PLAY RECAP
2025-09-19 12:05:06.126150 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-19 12:05:06.126211 |
2025-09-19 12:05:06.244000 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-19 12:05:06.245067 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-19 12:05:07.038189 |
2025-09-19 12:05:07.038352 | PLAY [Base post-logs]
2025-09-19 12:05:07.048906 |
2025-09-19 12:05:07.049040 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-19 12:05:07.491324 | localhost | changed
2025-09-19 12:05:07.509975 |
2025-09-19 12:05:07.510153 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-19 12:05:07.548632 | localhost | ok
2025-09-19 12:05:07.557099 |
2025-09-19 12:05:07.557304 | TASK [Set zuul-log-path fact]
2025-09-19 12:05:07.577883 | localhost | ok
2025-09-19 12:05:07.590920 |
2025-09-19 12:05:07.591062 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-19 12:05:07.617974 | localhost | ok
2025-09-19 12:05:07.623857 |
2025-09-19 12:05:07.624004 | TASK [upload-logs : Create log directories]
2025-09-19 12:05:08.108008 | localhost | changed
2025-09-19 12:05:08.110903 |
2025-09-19 12:05:08.111017 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-19 12:05:08.592025 | localhost -> localhost | ok: Runtime: 0:00:00.006661
2025-09-19 12:05:08.596343 |
2025-09-19 12:05:08.596461 | TASK [upload-logs : Upload logs to log server]
2025-09-19 12:05:09.180902 | localhost | Output suppressed because no_log was given
2025-09-19 12:05:09.185302 |
2025-09-19 12:05:09.185515 | LOOP [upload-logs : Compress console log and json output]
2025-09-19 12:05:09.242217 | localhost | skipping: Conditional result was False
2025-09-19 12:05:09.247391 | localhost | skipping: Conditional result was False
2025-09-19 12:05:09.259688 |
2025-09-19 12:05:09.259976 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-19 12:05:09.308305 | localhost | skipping: Conditional result was False
2025-09-19 12:05:09.309044 |
2025-09-19 12:05:09.312264 | localhost | skipping: Conditional result was False
2025-09-19 12:05:09.325341 |
2025-09-19 12:05:09.325583 | LOOP [upload-logs : Upload console log and json output]
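Aside: the "Clean the cloud environment" task above logs the same resource-teardown sequence twice (once in post.yml, once in cleanup.yml), and the ordering is deliberate: servers go first because they hold ports and volumes, routers are disconnected from subnets before subnets and networks are deleted, and the routers themselves go last. The sketch below is only an illustration of that ordering taken from the log; `run_teardown`, `TEARDOWN_STEPS`, and the handler mapping are hypothetical names, not the actual osism cleanup script.

```python
from datetime import datetime

# Teardown order as logged by the cleanup task. Deleting in this order
# avoids "resource in use" conflicts: servers hold ports and volumes,
# routers hold subnet interfaces, floating IPs block router deletion.
TEARDOWN_STEPS = [
    "clean up servers",
    "clean up keypairs",
    "wait for servers to be gone",
    "clean up ports",
    "clean up volumes",
    "disconnect routers",
    "clean up subnets",
    "clean up networks",
    "clean up security groups",
    "clean up floating ips",
    "clean up routers",
]


def run_teardown(handlers, log=print):
    """Run each step's handler (if one is registered), logging the step
    name with a timestamp in the same style as the job console output."""
    for step in TEARDOWN_STEPS:
        log(f"{datetime.now():%Y-%m-%d %H:%M:%S} - {step}")
        handler = handlers.get(step)
        if handler is not None:
            handler()


# Dry run: no handlers registered, so only the step names are logged.
run_teardown({})
```

In a real cleanup the handlers would wrap the corresponding delete calls (e.g. openstacksdk's `conn.compute.delete_server`, `conn.network.delete_port`, and so on); the second cleanup pass in the log completes in about a second precisely because every step finds nothing left to delete.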